
Unleashing the Power of Conversation: An Analytics Revolution

The Senior Vice President of Strategic Development at a Fortune 500 company was in a bind. She urgently needed to understand why sales had fallen sharply in two sectors ahead of an upcoming strategy meeting.

The task of sourcing this information fell to Mike, the company's top analyst, who was already swamped with requests. The company's new analytics tool, complex but powerful, could only be operated efficiently by a few people like Mike. As the VP waited for the report, she reflected on the irony: they had advanced technology to streamline operations, yet the bottleneck was the human element. Mike, skilled but overstretched, was struggling to keep up with demand.

Does this situation sound familiar? It's a common dilemma that the analytics industry has been grappling with for years. That's why we were thrilled to introduce inmydata copilot late last year. Now, during the meeting, the VP can simply ask her phone and receive an instant, comprehensive response with charts that shed light on the factors behind the underperforming sectors.

So, how did our relatively small company in Scotland manage to steal a march on the rest of the analytics industry and be the first to deliver conversational analytics? The answer lies in our existing platform, which was ideally suited to overcoming the obstacles of conversational analytics by integrating with large language models (LLMs).

Speed
To deliver a conversational experience, response times need to be fast, ideally within 30 seconds. Since a significant portion of that time is taken up by the language model processing the question, retrieval of any necessary data has to be exceptionally fast. Fortunately, inmydata was designed from the outset to provide sub-second responses, even when summarizing millions of rows of data.
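To make the constraint concrete, here is a minimal back-of-the-envelope sketch. The timings are illustrative assumptions, not measured figures; the point is simply how little of the 30-second budget is left for data retrieval once the language model has done its work.

```python
# Illustrative latency budget; the figures below are assumptions for the
# example, not measured inmydata or LLM timings.
response_budget_s = 30.0   # target for a conversational reply
llm_processing_s = 25.0    # assume the language model takes most of the budget
chart_building_s = 2.0     # assume a couple of seconds to assemble the response

data_retrieval_s = response_budget_s - llm_processing_s - chart_building_s
print(f"Time left for data retrieval: {data_retrieval_s:.1f}s")  # 3.0s

# With only a few seconds to spare, sub-second queries are what keep the
# conversation feeling instant, even over millions of rows.
```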

Query Complexity
While large language models (LLMs) are capable of generating simple SQL statements for data retrieval, they are less reliable when it comes to complex SQL. For example, if your data retrieval involves calculations or aggregates, the SQL generated by an LLM is often flawed or incorrect. However, inmydata was designed to let business users easily create their own data views. This allows us to provide the LLM with pre-summarized datasets that already include all the necessary aggregations and calculations, so the LLM is only ever asked to generate simple SQL statements.
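As a rough illustration of the idea, the sketch below builds a prompt around a hypothetical pre-summarized view; the view name, columns, and prompt wording are invented for the example and are not inmydata's internal format. Because the aggregates and calculations already live in the view, the statement the LLM needs to produce is a plain filter-and-sort.

```python
# Illustrative only: pre-summarized views keep LLM-generated SQL simple.
# The view name, columns and prompt wording are hypothetical.

VIEW_SCHEMA = """
View: weekly_sales_summary
Columns:
  region        TEXT  -- business region name
  fiscal_week   INT   -- financial week number
  total_sales   REAL  -- pre-aggregated sales value
  margin_pct    REAL  -- pre-calculated margin percentage
"""

def build_prompt(question: str) -> str:
    """Embed the view schema so the model only has to translate the
    question into a simple SELECT, with no joins, GROUP BYs or
    calculations of its own."""
    return (
        "Generate a single SQL SELECT statement against the view below.\n"
        f"{VIEW_SCHEMA}\n"
        f"Question: {question}\n"
    )

print(build_prompt("Which regions had falling sales over the last two fiscal weeks?"))

# The kind of statement we expect back: a plain filter-and-sort over
# columns that already contain the aggregates.
#   SELECT region, fiscal_week, total_sales
#   FROM weekly_sales_summary
#   WHERE fiscal_week >= 51
#   ORDER BY region, fiscal_week;
```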

Data Context
Data is often stored with little or no context: cryptic table and column names, and no business definitions that give it meaning. This makes it impossible for an LLM to generate responses that truly make sense of the data. However, because inmydata was designed to empower business users to create their own data views, it includes a metadata layer that delivers exactly the context the LLM needs to answer queries effectively.
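The snippet below sketches what such a metadata layer might look like in spirit: business-friendly names and definitions attached to otherwise cryptic columns, rendered as plain text the LLM can read alongside the question. The column names and field layout are invented for the example, not inmydata's actual metadata format.

```python
# Illustrative metadata layer: business context attached to cryptic columns.
# Column names and structure are hypothetical.

column_metadata = {
    "TXN_AMT_LCY": {
        "business_name": "Sales Value",
        "definition": "Transaction value in local currency, net of tax",
    },
    "STR_ID": {
        "business_name": "Store",
        "definition": "Retail store identifier, one per physical location",
    },
    "FSCL_WK": {
        "business_name": "Financial Week",
        "definition": "Week number in the company's fiscal calendar",
    },
}

def describe_columns(metadata: dict) -> str:
    """Render the metadata as plain text for the LLM prompt, so 'sales'
    can be mapped to TXN_AMT_LCY and 'last week' to FSCL_WK."""
    return "\n".join(
        f"{column}: {info['business_name']} - {info['definition']}"
        for column, info in metadata.items()
    )

print(describe_columns(column_metadata))
```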

Validation
Language is often ambiguous and open to interpretation. For instance, "Give me the top 10 stores based on sales in the last week" could mean sales in the last financial week or sales in the last 7 days. The LLM will make an assumption about your intended meaning, so it's crucial that you are aware of that assumption when you get a response. That's why inmydata always presents a chart offering a clear, concise representation of the data behind your answer, so you can quickly identify and address any possible misinterpretation.
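The ambiguity is easy to see in code. The short example below, with dates chosen arbitrarily for illustration, computes the two date ranges that "the last week" could reasonably mean; surfacing the resolved range alongside the chart is what lets you spot which reading was assumed.

```python
# Illustrative only: two reasonable readings of "sales in the last week".
from datetime import date, timedelta

today = date(2024, 3, 6)  # a Wednesday, chosen arbitrarily for the example

# Reading 1: the last 7 calendar days.
rolling_range = (today - timedelta(days=7), today)

# Reading 2: the most recent complete Monday-to-Sunday week
# (a stand-in for "last financial week"; real fiscal calendars vary).
last_monday = today - timedelta(days=today.weekday() + 7)
calendar_week_range = (last_monday, last_monday + timedelta(days=6))

print("Rolling 7 days:     ", rolling_range)        # 2024-02-28 to 2024-03-06
print("Last complete week: ", calendar_week_range)  # 2024-02-26 to 2024-03-03
```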

Privacy
Most organisations are understandably nervous about exposing their data to the companies that build LLMs, so we need to be absolutely sure data security is never breached. At the beginning of the conversation, we securely send a small sample of your data to a private, stateless session with the LLM. This serves only to give the LLM an understanding of the structure of the data, and it is never stored beyond the life of the conversation, shared, or used for training. From that point on, the LLM is solely responsible for generating code, which we execute locally without ever transferring any of your data outside our secure platform.
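As a rough sketch of that flow (the stateless session client and its method names below are hypothetical placeholders, not a real library): a handful of sample rows goes out, generated SQL comes back, and the query runs locally against data that never leaves the platform.

```python
# Illustrative sketch of the privacy flow; stateless_llm_session() and its
# methods are hypothetical placeholders, not a real client library.
import sqlite3

def small_sample(conn: sqlite3.Connection, table: str, rows: int = 5) -> list:
    """Take a handful of rows: enough for the model to learn the shape
    of the data, never the full dataset."""
    return conn.execute(f"SELECT * FROM {table} LIMIT {rows}").fetchall()

def run_locally(conn: sqlite3.Connection, generated_sql: str) -> list:
    """Execute the model's generated query inside the secure platform;
    the underlying data never leaves it."""
    return conn.execute(generated_sql).fetchall()

# Intended flow (client calls are hypothetical):
#   with stateless_llm_session() as session:
#       session.send_sample(small_sample(conn, "weekly_sales_summary"))
#       sql = session.ask("Which regions had falling sales?")
#       results = run_locally(conn, sql)   # data stays inside the platform
```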

Visit our website to find out more about inmydata. If you're interested in a quick demo, simply submit your email address below and we'll be in touch.