UpdateAI – Zoom meeting assistant

At UpdateAI, our mission is simple: to make meetings better so you can do better things.

And that’s never been more important than right now. As multiple back-to-back Zoom calls have become the norm for many customer success professionals (CSPs) and client relationship teams, it’s also become increasingly difficult to keep track of the key action items stemming from each meeting. That’s an issue, because when it comes to building client relationships, attention to detail is paramount.

That’s where we come in. UpdateAI, leveraging our revolutionary artificial intelligence technology, detects action items for you – allowing you to keep your focus on the customer rather than on taking notes. The data doesn’t lie, either: UpdateAI outperforms competitors in action item detection by up to 180% in overall accuracy, based on a test set of 25 meetings comprising 1,000 minutes of speech. I’ll go into more detail on how we accomplish our mission in a moment. First, though, we want to share the genesis of UpdateAI and how we started down this road of helping CSPs meet their challenges head-on.

How Our Focus on Action Items Began

UpdateAI’s mission is older than the company itself.

On Day 0, I, along with co-founder Bill Gross, had a clear priority – our goal was to free tribal working teams of noisy cross-functional communication. Complex and convoluted messaging, we both felt, only hampered productivity and made the task at hand more obscure. Our aim was to make it easy for team members to zoom in clearly on the big picture – and block out the nonsense.

(That’s why our action item detection model was dubbed “Hubble,” after NASA’s Hubble Space Telescope, due to its ability to home in on details and provide a clear picture.)

Our years of experience working with teams helped us recognize three critical communication types: key decisions, action items, and open questions. All three of these are fixtures of meetings, but action items, in particular, stood apart.

Why? Because they impact both the providers and the clients. They’re something that needs to be done, yet they can fall through the cracks, since it’s difficult to keep track of action items while remaining fully present on calls. And compounding matters, we know customer success is under-resourced in many cases. That under-resourcing adds to fatigue and often drives attrition.

The “3 Ps” and Data Science

To meet this challenge, we’ve carefully structured our approach to creating industry-leading action item detection. This process is founded on the “3 Ps”: People, Process and Product, in that order.

Here’s what we mean by that:

People: Getting not only the best, but the right experts. This includes data scientists with considerable natural language processing and machine learning experience.
Process: Taking an approach that is academic and output-driven, and that makes full use of our people.
Product: Creating enablement tools and underlying services to help us build the best product.

Let’s dive into the 3 Ps more so you have a better idea of how they’ve helped UpdateAI craft the premier action item detection technology on the market.

The Right People

First, to build our app, we knew it was imperative to assemble a team with a high-level skillset. And we’ve done that, bringing aboard PhD-level data scientists and a team of classically trained data annotators, including team members who have earned Master’s degrees in Linguistics. Our data-driven team is the foundation that our process and product are built upon.

UpdateAI Data Science Team Leadership

Data Science Lead
Sergey Kondratiuk

PhD in Machine Learning

Data Annotation Lead
Chenay Gladstone
Master’s in Linguistics

UpdateAI’s Process

Like any difficult mission, we had to break down the problem we were solving into smaller parts. This allowed a straightforward process to be formulated and executed. Simply saying “we’re going to capture all action items” wasn’t enough – it only would have been a fruitless endeavor that created a lot of noise.

From the start, we wanted to avoid a generic, unimaginative approach; we didn’t want to become what I refer to as a “CTRL+F” solution, as in, just locating keywords in the transcript. That’s not enough to meet our users’ demands.

This led us on a journey to truly understand what constitutes an action item in common conversations – particularly ones with customers.

We started by looking at sample meeting transcripts that other startups gave us. Our annotation team went through those samples and labeled all of the action items. We then investigated the linguistic patterns behind those labeled action items. Here is a subset of the patterns that emerged:

Action Item Patterns

Request
Example: “Will you send me that, please?”

Offer with Acceptance
Example: Speaker 1: “I can make a ticket and show you where those are.”
Speaker 2: “Okay.”

Self-Assigned Task
Example: “I need to go back and check my filters.”

Agreement
Example: “Sure.” or “Sounds good.”

Conditional Commitment
Example: “And if it makes it easier and you can get down to the $6 price, flat price across the board, we’ll see what we can do.”

From there, the objective was to refine our definitions and determine which type of action item patterns were the most prevalent, in order to focus on making sure those were correct.

According to our research, the self-assigned action items were the most common items by a large margin. This was very useful to know; training for this type of action item provided a “simple” groundwork on which to build our model, and also gave us a basis for evaluating more precise language patterns. We called these “sub-patterns.” For example, the “simplest” self-assigned sub-pattern we discovered was “1st person pronoun + will + [action verb]” (e.g., “I will send the emails”).

But we also discovered the simplest sub-pattern isn’t all-encompassing. There are times when the pattern is used but does not indicate an action item – these cases are what we call “anti-patterns.” An example of an anti-pattern in this instance would be if the speaker is describing a quality or state of being (e.g., “I will be fine”).

Sub-Patterns and Anti-Patterns of Self-Assigned Action Items

Sub-pattern: 1st person pronoun + will + [action verb]
Example: “I will send the emails.”

Anti-pattern: 1st person pronoun + will + be + adjective
Example: “I will be fine.”
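As an illustration, the sub-pattern/anti-pattern pair above can be sketched as a simple regular-expression check. This is a hypothetical sketch for clarity only – UpdateAI’s actual detector is a trained model, not a regex, and the function name here is our own invention:

```python
import re

# Sub-pattern: 1st person pronoun + "will" + some following verb.
SUB_PATTERN = re.compile(r"\b(i|we)\s+will\s+\w+", re.IGNORECASE)

# Anti-pattern: 1st person pronoun + "will be" (quality or state of being).
ANTI_PATTERN = re.compile(r"\b(i|we)\s+will\s+be\b", re.IGNORECASE)

def matches_self_assigned(sentence: str) -> bool:
    """True when the self-assigned sub-pattern fires and the anti-pattern does not."""
    return bool(SUB_PATTERN.search(sentence)) and not ANTI_PATTERN.search(sentence)
```

Note how the anti-pattern check is what separates this from a naive “CTRL+F” keyword search: “I will send the emails” matches, while “I will be fine” is correctly rejected.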

Ultimately, the collection of these patterns, sub-patterns, and anti-patterns formed our proprietary action item taxonomy.

Action Item Patterns and Their Anti-Patterns

Each action item (AI) pattern is shown with the anti-pattern (AP) type that most often triggers a false match, followed by an AP example:

“I/we will + [action verb]” → Describing a Quality or State of Being (“I will be fine.”)
“I/we need + [infinitive]” → Explaining a Process (“We understand the first tool we need to install is Jira.”)
“I/we have + [infinitive]” → Presented with Negation (“I haven’t sent the email.”)
“Make sure” → Past Tense (“I’d figured I’d make sure they were looped in.”)
“Why don’t + pronoun” → In-Meeting Task (“Why don’t you load up the file.”)
“Let me/us” → Figure of Speech (“Let me see.”)

Here’s why this was important: the taxonomy gave us an objective way to talk about what action items are and are not – allowing our internal team of data scientists and annotators to align on a standard definition.

We found that the vast majority of action items can be expressed as declarative and imperative sentences. In order to target those cases and provide the most value as soon as possible, we modeled the problem as binary classification at the sentence level; each sentence received a label of 1 when it was an action item and a 0 otherwise. Binary classification allowed us to create a high-quality model that operated at low latency – something that was imperative for the near-real-time environment our team worked in.
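To make the per-sentence binary framing concrete, here is a toy sketch of a sentence-level classifier. Everything in it – the class name, the bag-of-words features, the tiny training set, and the logistic-regression approach – is an assumption for illustration, not UpdateAI’s production model:

```python
import math
import re

def tokenize(sentence):
    """Lowercase and split a sentence into word tokens."""
    return re.findall(r"[a-z']+", sentence.lower())

# Toy labeled data: 1 = action item, 0 = not. Labels follow the
# taxonomy described in the post; the sentences are illustrative.
DATA = [
    ("I will send the emails.", 1),
    ("We will update the contract tomorrow.", 1),
    ("I need to go back and check my filters.", 1),
    ("I will be fine.", 0),
    ("Let me see.", 0),
    ("I haven't sent the email.", 0),
]

class SentenceClassifier:
    """Minimal bag-of-words logistic regression producing a
    per-sentence binary action-item label (a sketch only)."""

    def __init__(self):
        self.weights = {}
        self.bias = 0.0

    def _score(self, tokens):
        return self.bias + sum(self.weights.get(t, 0.0) for t in tokens)

    def predict_proba(self, sentence):
        return 1.0 / (1.0 + math.exp(-self._score(tokenize(sentence))))

    def predict(self, sentence):
        return 1 if self.predict_proba(sentence) >= 0.5 else 0

    def fit(self, data, lr=0.5, epochs=300):
        # Online stochastic gradient descent on the logistic loss.
        for _ in range(epochs):
            for sentence, label in data:
                error = self.predict_proba(sentence) - label
                self.bias -= lr * error
                for t in tokenize(sentence):
                    self.weights[t] = self.weights.get(t, 0.0) - lr * error

clf = SentenceClassifier()
clf.fit(DATA)
```

A model this simple is cheap to run, which is one reason per-sentence binary classification suits a near-real-time setting: each sentence can be scored independently as soon as it is transcribed.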

Our Product

UpdateAI’s product is the result of both our people and our process – combining the knowledge of our team with the quantifiable research supporting our approach.

And of course, it has to be mentioned that no product survives its first contact with customers. We knew that going in, and we certainly understood that would be the case for a nascent data model based on a limited data set. This wasn’t concerning, though. Personally, as a product owner for a dozen previous startups, I knew there was beauty to be found in doing things that don’t scale in order to validate your product.

This understanding galvanized our team to search aggressively for solutions. One way we did that was by creating an advanced “human in the loop” system: our model made predictions on action items in real time, and a trained human verified each prediction behind the scenes. Then – with full confidence that we could serve those predictions back to the customer in near real time – we devoted a significant portion of our engineering roadmap to building out the advanced annotation console that would facilitate that process. This was critical: if the annotator deemed that the model had falsely predicted an action item, they selected the reason for the false positive from a drop-down menu.
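The verification step might look something like the sketch below. The data structure, function names, and the drop-down option list (borrowed from the anti-pattern types described earlier) are all assumptions for illustration – the real annotation console is not described in this detail:

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical drop-down reasons, taken from the anti-pattern types
# described earlier in the post.
FALSE_POSITIVE_REASONS = [
    "Describing a Quality or State of Being",
    "Explaining a Process",
    "Presented with Negation",
    "Past Tense",
    "In-Meeting Task",
    "Figure of Speech",
]

@dataclass
class Prediction:
    """A model-predicted action item awaiting human verification."""
    sentence: str
    verified: bool = False
    is_action_item: bool = True
    fp_reason: Optional[str] = None

def review(prediction: Prediction, confirmed: bool,
           fp_reason: Optional[str] = None) -> Prediction:
    """Human-in-the-loop step: the annotator confirms the prediction,
    or rejects it and records the false-positive reason."""
    prediction.verified = True
    prediction.is_action_item = confirmed
    if not confirmed:
        if fp_reason not in FALSE_POSITIVE_REASONS:
            raise ValueError("reason must come from the drop-down list")
        prediction.fp_reason = fp_reason
    return prediction
```

Structuring rejections this way is what makes the third goal below possible: every false positive carries machine-readable metadata about why the model was wrong.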

This process had three main goals:

1. Increase the performance of the model

2. Deliver a delightful experience to the user

3. Harvest valuable metadata to help with future enhancements adjacent to action item detection

And to further reinforce our methods, we formally conducted recurring group review sessions with our annotators to ensure inter-team alignment. Their alignment on what action items are ensured that labels were applied in a systematic manner. Those high-quality labels, in turn, gave us a high-quality dataset for building our model.
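One common way to quantify that kind of inter-annotator alignment – our illustration; the post does not name a specific metric – is Cohen’s kappa, which measures agreement between two annotators beyond what chance would predict:

```python
def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa for two annotators' binary labels (1 = action
    item, 0 = not). Undefined when chance agreement is exactly 1."""
    assert len(labels_a) == len(labels_b) and labels_a
    n = len(labels_a)
    # Observed agreement: fraction of sentences both annotators labeled the same.
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected chance agreement, from each annotator's label marginals.
    p_a1 = sum(labels_a) / n
    p_b1 = sum(labels_b) / n
    expected = p_a1 * p_b1 + (1 - p_a1) * (1 - p_b1)
    return (observed - expected) / (1 - expected)
```

A kappa near 1 indicates the annotators share the same working definition of an action item; values near 0 mean their agreement is no better than chance, signaling that the taxonomy needs another review session.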

This process was extensive and expensive. We knew it wouldn’t scale, and that it also required the permission of our initial users. But it was worth it.

The Result: Industry-Leading Action Item Detection

UpdateAI boasts the most accurate and precise detection of action items on the market today. We’ve compared it to companies with far larger datasets, and we outperform them on all major metrics (precision, accuracy, F1, etc.).

This is all tied back to the initial problem we are solving – how to make meetings easier for Customer Success and other teams in the business of building client relationships. As we touched on earlier, back-to-back Zoom meetings with customers are an integral part of any business in 2022.

But they take time, energy and focus to tackle appropriately. And at the end of the day, so many in this field are drowning in a pile up of action items.

Our mission is to help alleviate that 6:00 p.m. drag of trying to remember what was said, by whom, and in which meeting; to keep teams accountable for what they’ve promised their customers; to automatically detect action items behind the scenes, entirely by machine, and magically deliver the key details immediately after each call; and to do all of this in a way that is beautiful in its simplicity – while being directly integrated into Zoom.

UpdateAI doesn’t require a smorgasbord of distracting buttons to click, nor do we require countless post-it notes. It only requires the 3 Ps that comprise UpdateAI’s technology. That’s the power of UpdateAI – and why I am so proud of what our team has accomplished. I can’t wait to see what we accomplish next.