What’s driving the Data Shift Right market?

AI is transforming the data stack. Now, non-technical users are empowered to access business insights more easily than ever before.

Creating and maintaining a "single source of truth" has long been the Holy Grail for data-driven organizations. Accurate, up-to-date insights are the crux of everyday decision-making across all business functions. However, the systems that exist today across the data stack fail to make this process easy and accessible — as a result, non-technical users often struggle to access the right data when they need it. Defining business metrics, much less accessing the right metric at the right time, shouldn’t be so messy and complicated, or require an entire data team to build tailored solutions. But we’re optimistic that data democratization and AI advancements will significantly enhance business users' ability to access and leverage data within existing systems while freeing data teams to focus on more impactful problems.

The Data Shift Right market is on the rise. 

A rising crop of new analytics tools focuses on empowering non-technical stakeholders in enterprises. We’re calling this the Data Shift Right movement — professionals across sales, operations, marketing, finance, product, and other functions adjacent to technical teams will become data-fluent and self-sufficient as AI allows them to interact with data in natural language and access the transformation layer without knowing how to code or write SQL.

At Bessemer, we’re ready to back founders building in the data shift right space. Here, we explain the common problems founders in the space are setting out to solve, why now is the time to serve this market, and the seven characteristics we think make a best-in-class software solution in the category.

The data problem most leaders can relate to

Today, when business users want a better understanding of their most relevant KPIs, they often work with data analytics teams to define what they need to measure, how they want a KPI to be presented (e.g., frequency, trends), and where they want it to be delivered. The analytics team then pulls the data or creates a tailored dashboard for a specific team or professional — a process that, amidst many other priorities, can take days or weeks of back-and-forth communication. This reality leads to inefficiency, larger data teams, and lengthy periods where business users don’t have the insights they need. In addition, building these custom dashboards distracts data analysts from larger, more pressing priorities.

Most data-driven organizations are all too familiar with this situation. Tooling that enables business users to become more independent would empower various parts of the organization. It would also free up data teams to focus on more sophisticated models and research, machine learning and AI applications, and other strategic initiatives that better serve the needs of the company.

What now? Why now?

In the past few years, the modern data stack has matured significantly and become more standardized across companies and industries. Changes in technology have opened up new opportunities for businesses to derive value from their data. Data models now run in the background to optimize companies' core processes, simplify their services, and improve their customers' experience.

Added to this environment, the rapid development and improvement of LLMs and the AI applications built on top of analytics solutions have led to additional game-changing abilities for the “data shift right” professional:

  • The data lineage and transformation layer is now accessible: The transformation layer is where users define the organizational metrics they want to measure and specify the logic for how those metrics should be calculated. Historically, writing formulas and logic lay in the hands of the data team. This work required (1) strong familiarity with where data is stored, (2) an understanding of how it flows through the company’s data stack, (3) the ability to write business logic into the right systems, and (4) knowledge of SQL to define metrics. But with advancements in AI, it is now easier to search your data for where certain tables and columns live and to better understand the data’s context, ultimately leading to better documentation, improved access, and democratization of the data.
  • Data querying is more “natural”: In the past, users had to write long SQL queries or use Python to define the data they were looking for, then use BI tools like Tableau or Looker to build dashboards and visualize it. While many non-technical users learned these BI tools, they weren’t necessarily intuitive for a first-time user, and many business users would ask the analytics team to pull data for them rather than doing it themselves. With AI and LLMs sitting on top of the established data stack, a new wave of tools is helping team members across business functions discover and consume data more intuitively: users describe the information they are looking for in natural language and get back raw data, charts, and even simple dashboards. They can then ask for additional changes and edits in natural language as well. Beyond that, users can now interact with data through existing interfaces like Slack or a simple chatbot, which simplifies the experience even further.

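To make the querying pattern above concrete, here is a minimal sketch of how a natural-language-to-SQL layer might be wired up. Everything here is an illustrative assumption — the schema, the prompt format, and the `call_llm` stand-in are hypothetical, not the API of any particular product:

```python
# Hedged sketch of a natural-language-to-SQL layer. `call_llm` is a
# hypothetical stand-in for any LLM provider's completion API.

def build_prompt(schema: dict, question: str) -> str:
    """Give the model the warehouse schema as context, then the question."""
    tables = "\n".join(
        f"- {table}({', '.join(cols)})" for table, cols in schema.items()
    )
    return (
        "You are a SQL assistant. Available tables:\n"
        f"{tables}\n"
        f"Write one SQL query that answers: {question}"
    )

def answer(schema: dict, question: str, call_llm) -> str:
    """Translate a business user's question into SQL via the model."""
    return call_llm(build_prompt(schema, question))

# Illustrative schema and a stubbed "model" so the sketch runs end to end;
# a real deployment would call an actual LLM API instead of this stub.
schema = {"orders": ["customer_id", "amount", "created_at"]}
stub = lambda prompt: "SELECT SUM(amount) FROM orders"
sql = answer(schema, "What was total revenue?", stub)
```

The point of the sketch is the shape, not the model call: the tool holds the schema context so the business user only supplies the question.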
At Bessemer, we believe we are quickly approaching the next wave of data accessibility and visualization companies focused on the business user.

The analytics market has historically been crowded and competitive. With that in mind, we have identified seven characteristics that we think will help tomorrow’s data shift right solutions become more successful compared to their competitors and incumbents.

Characteristics of best-in-class data shift right platforms

  1. Quick time to value: Integrations with existing products should be easy and quick, and ideally shouldn’t require a technical team to spend time on implementation. Once the solution is implemented, users immediately get an intuitive product that often includes pre-made dashboards and reports and easy-to-search KPIs.
  2. Out of the box: Solutions include best practices and benchmarks from leading organizations. Founding teams often arrive with relevant opinions and functional expertise, providing advice on what to measure, how to measure it, and how to potentially tie the data to organizational processes.
  3. (Limited) customization: Solutions should allow users to make some adjustments easily — using no code / low code, by providing textual inputs, or by training the underlying LLMs on the business context. At the same time, we acknowledge there is a trade-off between the ease of use and level of customization, and therefore think adjustments should be limited.
  4. Call to action: Ideally, solutions not only provide the analysis, but also recommend an action the team can take based on the data. With usage, AI/LLM solutions can “learn” the business context and go on to recommend fine-tuned insights and next steps unique to the team.
  5. Source of truth: Tools should be accurate and considered to be the “source of truth” – saving time for other teams and keeping data consistent across the organization. To do so, it’s important to keep easy integrations with the existing data stack including BI and other tools used by data analytics teams. Keep in mind that these tools should follow the process of metric and model definition and be able to connect at the model and metric layer so the company doesn’t end up with multiple sets of metrics. For example, there shouldn’t be a scenario where the sales team and the finance team both define “total sales for the month,” but only one of those definitions is available through the company-wide metrics layer.
  6. Shareable: Findings should be easy to share and able to consume “feedback” — both among the product’s core users and teams as well as other internal and external stakeholders. Some common use cases include sharing dashboards over email, creating slides and board materials, or preparing QBR materials quickly. According to Operating Advisor Solmaz Shahalizadeh, former Head of Data at Shopify, her “dream tool can also ‘learn’ from the feedback and conversations that happen outside of traditional data tools, about the data.”
  7. Vertical / focused solution: We believe there is an advantage for companies that focus on a specific business function (e.g., product, marketing operations, sales operations) or a specific business vertical like eCommerce. Industry or functional context helps tailor solutions to customers’ needs.
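Point 5 above hinges on every team reading the same metric definition. As a hedged illustration — the names and SQL below are invented for the example, not any vendor’s API — a metrics layer can be as simple as a registry that rejects conflicting definitions:

```python
# Toy metrics registry illustrating the "source of truth" idea: metrics
# are defined once, and a second, conflicting definition is rejected.
from dataclasses import dataclass

@dataclass(frozen=True)
class Metric:
    name: str
    sql: str    # canonical calculation, written once
    owner: str

REGISTRY: dict[str, Metric] = {}

def register(metric: Metric) -> None:
    """Add a metric, refusing redefinition by another team."""
    if metric.name in REGISTRY:
        raise ValueError(
            f"'{metric.name}' is already defined by {REGISTRY[metric.name].owner}"
        )
    REGISTRY[metric.name] = metric

# The sales team defines monthly sales exactly once...
register(Metric(
    name="total_sales_month",
    sql="SELECT SUM(amount) FROM orders "
        "WHERE created_at >= date_trunc('month', now())",
    owner="sales-ops",
))
```

A second `register` call for `total_sales_month` from, say, finance would raise an error, surfacing the conflict instead of silently forking the definition into two competing numbers.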

Searching for that single source of truth

Finding that “single source of truth” is a perennial pursuit for most organizations.

Let’s look at a specific example with revenue data — it is typically scattered across many systems and is frequently of poor quality. Despite this, RevOps teams depend entirely on this data to make important business decisions like sales rep staffing. Yet they often cannot access the data themselves, lacking the technical skills and permissions to run SQL queries on the data warehouse. So a simple question such as, “Which customers have hit their usage cap for the month and should be upsold?” requires a SalesOps leader to submit a request for the data team to run a query, then wait for an answer. The issue only compounds as data analyst teams are pulled in different directions with a long queue of unanswered Jira tickets from across the organization.
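For a sense of how small the self-serve version of that question really is, here is a sketch in plain Python over exported usage rows; the field names and figures are invented for illustration:

```python
# Illustrative usage export — customer names, fields, and numbers
# are assumptions, not real data.
usage = [
    {"customer": "Acme",    "monthly_usage": 1050, "usage_cap": 1000},
    {"customer": "Globex",  "monthly_usage": 400,  "usage_cap": 500},
    {"customer": "Initech", "monthly_usage": 990,  "usage_cap": 1000},
]

# "Which customers have hit their usage cap and should be upsold?"
upsell = [row["customer"] for row in usage
          if row["monthly_usage"] >= row["usage_cap"]]
# → ["Acme"]
```

This is the kind of one-liner that today routes through a Jira ticket and a data-team queue; the data shift right thesis is that the business user should be able to ask it directly.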

The result? Companies miss revenue opportunities when data is siloed in different systems, make poor decisions when using wrong or incomplete information, and waste precious data analyst resources on manual data consolidation and cleanup. To get around this, GTM teams have started manipulating data by circumventing the data warehouse, only exacerbating the problem of data silos and eroding trust in any one system of truth.

There has to be an easier way to make data decisions. Hypergrowth organizations depend on every function learning and iterating on its actions based on new insights, without having to depend on technical teams. That’s why emerging tools serving Data Shift Right professionals are changing the AI-analytics landscape. Take these two emergent examples currently in the Bessemer portfolio:

Preql empowers business users to create and manage their own metrics without relying on a data team. The platform allows non-technical users to connect data sources, develop metrics, and produce reports, facilitating company-wide alignment on key KPI definitions without ongoing manual updates. Preql offers standard KPIs while allowing customizations to suit each company’s needs. Furthermore, using “Preql AI,” users can now create custom metrics and dimensions in natural language, all while leveraging the customer’s existing modern data stack in the background.

Then there’s Seam AI — a chat interface that can answer any question across all of your customer systems using natural language and AI. Seam’s magic is beneath the chat interface: the platform automatically transforms and unifies siloed customer data into one centralized view, enabling teams to uncover new insights and generate powerful customer intelligence. Additionally, Seam can sync these outputs back into customers’ business systems to centralize reporting and automate workflows. Seam’s models, trained on thousands of queries across the most popular customer systems, produce semantically correct SQL, ensuring queries are contextually relevant rather than just syntactically accurate.

As Bessemer continues to survey the data shift right market, we’ve seen many promising AI analytics leaders on the rise to help deliver the Holy Grail companies always seek.

How will AI analytics evolve?

As the analytics ecosystem evolves, several companies are already providing horizontal solutions that cater to general needs rather than specific business functions. These products enhance data retrieval through intuitive interfaces like chatbots or prompts. Simultaneously, traditional BI tools such as Tableau and Looker are integrating AI to simplify their interfaces and improve user experience. We anticipate these solutions will continue to grow momentum as they streamline processes and boost productivity.

Yet, a crucial question remains: who will be the primary users of these tools? (In many cases, users still need at least some technical understanding in order to use these tools.) How will these users be able to quickly generate a business-ready output? We are optimistic about the potential for specialized, vertical solutions to prevail.

Keep in mind that a big part of the data stack for data engineers and analysts will likely remain the same, living side by side with the data shift right solutions that serve non-technical users. These tools have to talk to each other so they stay aligned at the core, but we don’t see the business-focused options ever replacing the technical tools.

Accessing data will remain an essential need for companies looking to improve and make data-driven decisions. As we chart toward a future where data is not just a tool but a catalyst for transformative business decisions, we are excited about a path where non-technical users are empowered to access data themselves for relevant insights and feedback.

If you are building an analytics company serving the data shift right professional, we’d love to get in touch. Reach out to Yael Schiff ([email protected]) and Lindsey Li ([email protected]).