
InsurTech Ohio Spotlight with Sai Raman

Sai Raman is the Founder and CEO at CogniSure AI, a platform that uses AI to help carriers automate the submission intake process. Sai was interviewed by Matt Workman, Senior Client Partner at Persistent Systems.



Sai, what do you see as the fundamental problem of submission intake?

"PropertyCasualty360 estimates that the insurance industry processes around one hundred million submissions, involving brokers, wholesalers and carriers. With approximately 10 attachments per submission, that adds up to over a billion documents. The challenge with submission intake lies in the overload of non-standardized documents, primarily in PDF and Excel formats. There are numerous variations, and when it comes to loss runs, each carrier has its unique definitions.

The most significant challenge is that brokers spend considerable time preparing submissions, while carriers must manually re-enter and analyze data from these non-standardized documents when evaluating risks. This time-consuming process results in lost business opportunities and inaccurate pricing."

What do you envision for the future of data in the insurance marketplace? What is your ideal scenario for gathering risk data?

"Ideally, we should establish a universal risk-submission-level dataset. We need a platform or data layer that all stakeholders, including insured parties, brokers and carriers, can adopt. They should be able to view exposure, coverage and claim data through a unified framework. While such a framework doesn't exist yet, it represents the 'holy grail' of achieving a consistent risk dataset. Companies like Google have made attempts, but their progress remains uncertain.

Progress is being made as we've started developing a universal submission-level dataset called 'submission 360,' accessible to insured parties, brokers and underwriters. The biggest challenge has been extracting data from documents and normalizing it into a standardized format. With technology advancements, I anticipate the emergence of a universal risk-level dataset for the entire insurance value chain.

In some parts of the world, governments provide and own the dataset, which regulatory bodies could potentially replicate. While this may be less likely in the United States, it's working in other regions."
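For illustration only, here is a minimal sketch of what a unified, submission-level record along the lines of the 'submission 360' idea might look like, with exposure, coverage and claim data in one structure. The field names and types below are assumptions made for this example, not CogniSure's actual data model.

```python
# Hypothetical sketch of a unified submission-level record.
# Field names and structure are illustrative only.
from dataclasses import dataclass, field
from datetime import date
from typing import List

@dataclass
class Exposure:
    location: str          # e.g., street address of the insured property
    tiv: float             # total insured value in USD
    construction: str      # construction class, e.g., "masonry"

@dataclass
class Coverage:
    line_of_business: str  # e.g., "general liability"
    limit: float
    deductible: float

@dataclass
class Claim:
    loss_date: date
    claim_type: str        # normalized type, e.g., "medical", "property"
    incurred: float        # total incurred amount in USD
    status: str            # "open" or "closed"

@dataclass
class Submission360:
    insured_name: str
    broker: str
    effective_date: date
    exposures: List[Exposure] = field(default_factory=list)
    coverages: List[Coverage] = field(default_factory=list)
    claims: List[Claim] = field(default_factory=list)
```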

How do you propose creating efficiencies in this space, and why has this problem not been solved?

"This problem persists due to the continuously evolving nature of risk associated with individuals and businesses. Regulatory bodies are hesitant to define standards for underwriting risk, and insurance carriers consider data their proprietary asset, limiting collaboration with brokers for competitive quoting.

To create efficiencies, both brokers and carriers should transition from email submissions to digital submissions. Brokers need a comprehensive 'Customer 360' view of customer data, including exposure and claim details.

The adoption of APIs between carriers and brokers is gaining momentum, though it will take time. We are moving in the right direction."
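As a rough illustration of API-based digital submissions replacing emailed documents, the sketch below shows a broker system posting a structured submission to a carrier endpoint. The URL, payload fields, auth scheme and response shape are placeholders for this example, not any real carrier's interface.

```python
# Hypothetical example of a broker pushing a structured submission to a
# carrier over an API instead of emailing PDFs. Endpoint and fields are placeholders.
import requests

submission = {
    "insured_name": "Acme Manufacturing LLC",
    "effective_date": "2025-01-01",
    "exposures": [{"location": "100 Main St, Columbus, OH", "tiv": 2_500_000}],
    "claims": [{"loss_date": "2023-06-14", "claim_type": "property", "incurred": 18_400}],
}

response = requests.post(
    "https://api.example-carrier.com/v1/submissions",  # placeholder URL
    json=submission,
    headers={"Authorization": "Bearer <token>"},
    timeout=30,
)
response.raise_for_status()
print(response.json()["submission_id"])                # hypothetical response field
```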

What can help overcome the obstacles to solving this problem?

"Technological advancements over the last decade have opened opportunities for insurtech companies to convert unstructured documents into structured data. This is a chance for brokers and carriers to reduce manual work through insurtech solutions. Brokers and carriers frequently handle various submission documents (e.g., applications, SOV, schedules, and loss runs) in PDF and Excel formats, which technology can transform into valuable insights within minutes.

Another significant obstacle is integrating insurtech solutions into the complex landscape of carrier systems. Carriers and brokers may be reluctant to change their processes to leverage insurtech solutions. It's essential for insurtech providers to demonstrate how their solutions can complement existing processes and systems, delivering value."
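To make the unstructured-to-structured step concrete, here is a toy sketch of normalizing a loss-run extract into standardized rows. The input layout and the code-to-claim-type mapping are invented for illustration; real loss runs differ by carrier, which is the normalization difficulty described above.

```python
# Toy illustration: turn a raw loss-run extract into normalized rows.
# The input format and the prefix mapping below are invented examples.
import csv
import io

raw_extract = """\
Claim No,Loss Date,Prefix,Incurred,Status
4471023,2022-03-17,14,12500.00,Closed
4471988,2023-08-02,27,4300.50,Open
"""

# Hypothetical mapping from one carrier's internal codes to normalized claim types.
PREFIX_TO_TYPE = {"14": "medical", "27": "property"}

rows = []
for record in csv.DictReader(io.StringIO(raw_extract)):
    rows.append({
        "claim_number": record["Claim No"],
        "loss_date": record["Loss Date"],
        "claim_type": PREFIX_TO_TYPE.get(record["Prefix"], "unknown"),
        "incurred": float(record["Incurred"]),
        "open": record["Status"].lower() == "open",
    })

print(rows[0])  # {'claim_number': '4471023', 'loss_date': '2022-03-17', ...}
```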

How has the emergence of Large Language Models (LLMs) changed risk evaluation?

"LLMs are exceptionally powerful tools. We conducted a capstone project at MIT to explore their potential for processing complex insurance industry documents. LLMs, trained in natural language processing on extensive texts, offer significant potential. However, they may not fully grasp intricate insurance jargon. For instance, while they can recognize a claim type like 'medical,' they might not understand that Travelers’ loss runs represent this as a two-digit code under the 'File Prefix' field.

Fine-tuning LLMs on insurance jargon demands substantial effort and time, and many insurtech companies are investing in this to create an 'Insurance GPT.' Additionally, insurance carriers are cautious about sharing their enterprise data with external AI models. Hence, creating a trust layer where data is masked or encrypted before it is passed to an LLM is crucial. There are many unanswered questions, but the potential of AI in this field is enormous."
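As one illustration of the trust-layer idea, the sketch below masks identifying values locally before any text would be sent to an external LLM, keeping the mapping on the carrier's side so the values can be restored in the model's response. The patterns and token scheme are assumptions made for this example, not a production approach.

```python
# Minimal sketch of a local "trust layer": mask sensitive values before text
# leaves the carrier, and restore them afterward. Patterns are illustrative only.
import re

def mask(text: str):
    replacements = {}
    counter = 0

    def _sub(match: re.Match) -> str:
        nonlocal counter
        token = f"<MASKED_{counter}>"
        replacements[token] = match.group(0)
        counter += 1
        return token

    # Hypothetical patterns: policy numbers like "POL-1234567" and SSN-style IDs.
    masked = re.sub(r"\bPOL-\d{7}\b", _sub, text)
    masked = re.sub(r"\b\d{3}-\d{2}-\d{4}\b", _sub, masked)
    return masked, replacements

def unmask(text: str, replacements: dict) -> str:
    for token, original in replacements.items():
        text = text.replace(token, original)
    return text

masked_text, mapping = mask("Loss run for POL-1234567, claimant SSN 123-45-6789.")
# masked_text is what would be sent to the LLM; the mapping never leaves the carrier.
print(masked_text)
print(unmask(masked_text, mapping))
```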



