How do I Use Text Analysis Reports?

Streamline your review process with AI-driven text analysis that provides fast, meaningful insights from your participants’ journal entries and text question responses.

What’s different about Text Analysis Reports?

Instead of having to review every journal entry or open-text question response, these reports provide AI-created summaries and sentiment analysis, so you can quickly derive insight and share progress and success with your clients, without any downloads or manual analysis.

Where to find Text Analysis Reports

There are three reports that include AI-driven text analysis in the form of sentiment and summarization:

  • Module Report with Text Analysis
  • Session Report with Text Analysis
  • Journal Usage Report with Text Analysis

You’ll find these reports in their own section on the Reports page, accessible from your sidebar, and on any Reports tab in your Program, Module, or Session screens.

Journal Usage Report with Text Analysis (Beta)

Wading through journal entries to determine how your participants are feeling about their behavior change commitments can be an overwhelming task. The Journal Usage Report with Text Analysis makes it easy!

This report will show you:

  • the percentage of journal entries that show positive, negative, neutral, or mixed sentiment
  • how that sentiment breakdown shifts over the course of the commitment period
  • a summary of the content of journal entries within each sentiment group

You’ll find the sentiment breakdown in the new Journal Content Analysis section of the report.

Module and Session Reports with Text Analysis (Beta)

You can quickly surface key themes from your open-response questions. 

For each text question type, you’ll see:

  • a summary of all participant responses
  • a summary of participant responses for each commitment theme

What to expect with Reports with Text Analysis 

These reports are currently available in Beta while we continue to optimize the AI model to improve the accuracy and relevance of summaries. The best way to help improve the model is to use these reports and let us know if you see anything unexpected.

Text summaries are generated when the category shown in the tile has at least 3 responses. When there are fewer than 3 responses, you’ll see the message “3 or more responses are needed in order to provide a summary”. While the model is improving, you may see unexpected results in some tiles, such as summaries unrelated to your program. If you see that, or anything else unexpected, here are some steps you can take:

  1. Clear Cache and Refresh on the affected tiles.

  2. Using the traditional version of the report, which shows all responses, compare the AI-generated summary to the content of the responses to verify whether it makes sense.

  3. Report the issue to the Actionable team through the platform Chat Bot so we can use the information to improve the OpenAI model.

While our regular embedded dashboards can be set up for scheduled delivery by email, this feature is not available for our AI-driven dashboards or summary tiles.

Data Privacy

We use OpenAI to analyze and summarize text. None of our data is retained by OpenAI or used to train their models. Your program data can only be seen by your firm, and you can only see your firm’s data. If you see unexpected data in a summary, it indicates that the AI model is filling in information, not that another firm’s data has been exposed.

We have built our text analysis structure to leave personally identifiable information out of your program text summaries. However, since we do not review each summary, we can’t guarantee that the AI will get this right every time. We recommend reviewing your summaries before sharing them externally and re-generating the report if needed to create new summaries.