Session 2 - Navigating the AI Frontier: Large Language Models in Healthcare Delivery (InnoHEALTH 2024)

This session, "Navigating the AI Frontier: Transforming Healthcare with Large Language Models," brought together top experts to explore how AI and Large Language Models (LLMs) are revolutionizing healthcare. Held at IIIT Delhi, this discussion covered AI-powered medical assistants, real-world applications, interoperability, and the future of AI in medicine. In this session, you’ll explore how artificial intelligence and large language models (LLMs) are revolutionizing healthcare workflows by enhancing efficiency, accuracy, and patient engagement. A live demo of an AI medical assistant will showcase real-time patient interaction, demonstrating the power of conversational AI in clinical settings. Leading experts in AI and healthcare will share valuable insights on the latest innovations and practical implementations. The discussion will also address key challenges such as data privacy, model reliability, and regulatory concerns. You’ll gain a deeper understanding of the vast opportunities AI presents in improving care delivery and decision-making. Finally, the session will look ahead to the future of AI in healthcare—highlighting real-time monitoring, interoperability, and the rise of open-source collaboration shaping the next generation of medical technology.

Summary

 

Navigating the AI Frontier: Large Language Models in Healthcare Delivery

Table of Contents

  1. Introduction
  2. Panel Overview and Participants
  3. Setting the Stage: AI as the New Healthcare Ally
  4. Real-Time AI Demo: A Glimpse into the Future of Patient Interaction
  5. Panel Discussion Highlights
  6. Closing Remarks and Future Pathways
  7. Conclusion
  8. Key Takeaways

Introduction

The intersection of artificial intelligence (AI) and healthcare delivery is redefining how doctors, patients, and systems interact. At the InnoHealth 2024 Conference, a captivating session titled “Navigating the AI Frontier: Large Language Models (LLMs) in Healthcare Delivery” explored the profound transformation AI is driving across medicine, research, and patient care.

Moderated by Mr. Sachin Gaur, Director of Operations at InnoVrio Curis, the panel brought together experts who shared real-world applications, research insights, and ethical considerations shaping the next chapter of healthcare AI.

Panel Overview and Participants

Moderator:

  • Mr. Sachin Gaur, Director, Operations – InnoVrio Curis

Panelists:

  • Mr. Ganesh Gopalan, Co-Founder & CEO, GyanAI
  • Dr. Tapan Sethi, Physician-Scientist & Associate Professor, IIIT Delhi
  • Dr. Shithij Agarwal, Assistant Professor, Koita Centre for Digital Health, IIT Bombay
  • Dr. Amit Raj, Cardiologist & Healthcare Entrepreneur
  • Dr. Vijay Agarwal, Senior Healthcare Policy Expert & President, Consortium of Accredited Healthcare Organizations (CAHO)

Each panelist brought a unique blend of medical, technological, and entrepreneurial expertise, making the discussion both intellectually rich and deeply practical.

Setting the Stage: AI as the New Healthcare Ally

Mr. Sachin Gaur opened the session with a thought-provoking context:

“AI isn’t magic—it’s about pattern prediction, understanding language, and using it to serve human needs.”

He introduced the idea of LLMs as “medical sidekicks”, tools that can listen, reason, and assist doctors in real time. Using relatable examples, he explained how these models learn from vast datasets—text, voice, and images—to predict and respond intelligently.

Mr. Gaur also highlighted recent milestones, including Nobel Prizes awarded for breakthroughs in artificial neural networks and AlphaFold’s protein-structure modeling, underlining AI’s growing scientific legitimacy.

Real-Time AI Demo: A Glimpse into the Future of Patient Interaction

To bring AI’s potential to life, Mr. Gaur performed a live demonstration of an AI-powered medical assistant call.

The demo simulated a post-discharge patient conversation:

AI Assistant: “Hello, this is John, your AI medical assistant. Can we review your recent symptoms?”
Dr. Sethi (as patient): “I have a sore throat for about a week.”
AI: “Noted. Has it improved or worsened?”
Dr. Sethi: “It’s about the same.”
AI: “No fever or chest pain?”
Dr. Sethi: “No.”
AI: “Thank you. Monitoring consistent symptoms, no red flags. Would you like a follow-up call tomorrow?”

This demonstration illustrated how AI can extend doctor-patient engagement beyond hospital walls, potentially reducing readmission risks and improving remote monitoring.
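To make the mechanics concrete, here is a minimal Python sketch of the kind of scripted symptom check the demo illustrated. The question list, red-flag keywords, and escalation rule are assumptions made for this write-up, not the actual system demonstrated on stage.

```python
# Minimal sketch of a scripted post-discharge symptom check, loosely modeled on
# the demo dialogue above. Questions, red-flag keywords, and the escalation rule
# are illustrative assumptions, not the system shown in the session.

FOLLOW_UP_QUESTIONS = [
    "Has your main symptom improved, worsened, or stayed the same?",
    "Have you had any fever or chills?",
    "Any chest pain, palpitations, or shortness of breath?",
    "Any nausea, vomiting, or diarrhea?",
]

RED_FLAGS = {"fever", "chest pain", "palpitations", "shortness of breath", "vomiting"}


def review_answers(answers: list[str]) -> str:
    """Return a triage note: escalate if any answer mentions a red-flag symptom."""
    flagged = [a for a in answers if any(flag in a.lower() for flag in RED_FLAGS)]
    if flagged:
        return "ALERT: red-flag symptoms reported - notify the care team."
    return "Monitoring: symptoms consistent, no red flags. Offer a follow-up call."


if __name__ == "__main__":
    # Simulated patient answers, mirroring the on-stage exchange.
    simulated_answers = ["It's about the same", "No", "No", "None"]
    for question, answer in zip(FOLLOW_UP_QUESTIONS, simulated_answers):
        print(f"AI: {question}\nPatient: {answer}")
    print(review_answers(simulated_answers))
```

In a real deployment the scripted questions would likely be driven by an LLM, but keeping the escalation rule deterministic, as sketched here, is a common way to ensure clinical alerts do not depend solely on free-form model output.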

Panel Discussion Highlights

AI for Maternal and Community Health – Mr. Ganesh Gopalan

Mr. Gopalan shared moving examples of how AI is improving maternal health in rural India.

Through his company GyanAI, he described a voice-based system that connects ASHA workers and mothers via local dialects to deliver nutrition advice, vaccination reminders, and safety alerts.

“We answered more than 30,000 maternal health questions through voice AI,” he said. “It’s not magic—it’s about bringing healthcare access to the last mile.”

He emphasized the rise of Small Language Models (SLMs) — domain-tuned, privacy-safe models that outperform general-purpose LLMs in specialized settings like hospitals.

LLMs for Clinical Insight and Research – Dr. Tapan Sethi

Dr. Sethi approached the topic from a computational biologist’s perspective, describing how AI and chain-of-thought reasoning can simulate the diagnostic reasoning of doctors.

He illustrated how prompting models with step-by-step logic improved accuracy in medical diagnosis exercises, helping medical students learn structured reasoning virtually:

“When I was trained, no one taught me the reasoning path—just the answer. With AI, we can finally standardize how clinical reasoning is learned.”
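For readers new to the technique, the sketch below shows what chain-of-thought prompting looks like in practice: a worked, step-by-step exemplar is prepended to the new question so the model lays out its reasoning before committing to an answer. The exemplar wording, the vignette, and the call_llm() stub are illustrative assumptions, not the prompts or models used in the work described here.

```python
# Sketch of chain-of-thought prompting for a clinical MCQ, in the spirit of the
# approach described above. The exemplar and the call_llm() stub are assumptions
# for illustration only.

COT_EXEMPLAR = (
    "Question: A patient reports 3 days of productive cough, fever of 38.5 C, and\n"
    "focal crackles at the right lung base. What is the most likely diagnosis?\n"
    "Reasoning: Step 1 - note the symptom cluster (cough, fever). Step 2 - the\n"
    "focal chest finding localizes the process. Step 3 - an acute focal lower\n"
    "respiratory infection fits best. Answer: community-acquired pneumonia."
)


def build_cot_prompt(case_vignette: str) -> str:
    """Prepend a worked, step-by-step exemplar so the model reasons before answering."""
    return (
        f"{COT_EXEMPLAR}\n\n"
        f"Question: {case_vignette}\n"
        "Reasoning: think step by step, listing findings, then differentials,\n"
        "then the single best answer.\nAnswer:"
    )


def call_llm(prompt: str) -> str:
    """Stand-in for whatever LLM client is actually used; replace with a real call."""
    return "(model output would appear here)"


if __name__ == "__main__":
    vignette = "A 59-year-old woman presents with two hours of severe abdominal pain..."
    print(call_llm(build_cot_prompt(vignette)))
```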

He also discussed AI’s role in extracting structured insights from medical notes, improving interoperability, and enabling AI-assisted antibiotic resistance surveillance through community-level data collection.
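A common pattern for the note-extraction use case mentioned above is to ask the model for a fixed JSON schema and validate the result before it enters any downstream system. The field names, sample note, and faked model reply in this sketch are illustrative assumptions, not the panelists' actual tooling.

```python
import json

# Sketch of pulling structured fields out of a free-text note with an LLM prompt.
# The field list, sample note, and canned model reply are illustrative assumptions.

FIELDS = ["chief_complaint", "duration", "medications", "follow_up_needed"]

PROMPT_TEMPLATE = (
    "Extract the following fields from the clinical note and return ONLY valid JSON "
    "with keys {fields}. Use null when a field is not mentioned.\n\nNote:\n{note}"
)


def extract(note: str) -> dict:
    """Prompt an LLM for a fixed JSON schema and validate the reply."""
    prompt = PROMPT_TEMPLATE.format(fields=FIELDS, note=note)
    # The prompt would be sent to a model here; a canned reply keeps the sketch runnable.
    fake_llm_reply = (
        '{"chief_complaint": "sore throat", "duration": "1 week", '
        '"medications": null, "follow_up_needed": true}'
    )
    data = json.loads(fake_llm_reply)               # parse the model's JSON
    missing = [f for f in FIELDS if f not in data]  # basic schema check
    if missing:
        raise ValueError(f"Model response missing fields: {missing}")
    return data


if __name__ == "__main__":
    print(extract("Patient reports a sore throat for about a week, no fever."))
```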

Evaluation, Agents, and the Future of LLMs – Dr. Shravanthan

A radiologist turned AI researcher, Dr. Shravanthan shared his journey from medical imaging to AI model optimization.

He co-developed an open-source combination of Whisper + Llama models, achieving 20% lower word error rates in medical transcription—nearly matching expensive commercial systems.
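Word error rate (WER), the metric behind the 20% figure, is conventionally computed as the word-level edit distance between a reference transcript and the system output, divided by the reference length. The sample sentences below are invented for illustration.

```python
# Word error rate (WER): edit distance between reference and hypothesis transcripts,
# computed over words, divided by the reference length. Sample text is invented.

def word_error_rate(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # Standard dynamic-programming edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / max(len(ref), 1)


if __name__ == "__main__":
    ref = "patient denies chest pain and shortness of breath"
    hyp = "patient denies chess pain and shortness of breadth"
    print(f"WER: {word_error_rate(ref, hyp):.2%}")
```

On these two sample sentences the function reports 25%: two substitutions out of eight reference words.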

He emphasized evaluation (EVALs) as the next big challenge:

“Hospitals will soon face dozens of LLMs. Evaluation tells us which one works best for your workflow.”
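In practice, a hospital-side eval can be as simple as scoring each candidate model on a small labeled set drawn from the intended workflow and comparing the results. The toy triage task, candidate "models", and test cases below are placeholders, not a real benchmark or any system discussed in the session.

```python
# Minimal sketch of an evaluation (EVAL) harness: score each candidate on a small
# labeled set and compare. The candidates and test cases are placeholders.

TEST_CASES = [
    {"input": "sore throat, one week, no fever", "expected": "non-urgent"},
    {"input": "crushing chest pain, sweating",   "expected": "urgent"},
]

def model_a(text: str) -> str:
    return "urgent" if "chest pain" in text else "non-urgent"

def model_b(text: str) -> str:
    return "non-urgent"  # a deliberately weak baseline

def accuracy(model) -> float:
    hits = sum(model(case["input"]) == case["expected"] for case in TEST_CASES)
    return hits / len(TEST_CASES)

if __name__ == "__main__":
    for name, model in [("model_a", model_a), ("model_b", model_b)]:
        print(f"{name}: accuracy = {accuracy(model):.0%}")
```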

He predicted that AI Agents—autonomous LLM chains capable of handling report generation and context-aware decision-making—will dominate the next wave of healthcare tools.

AI for Post-Discharge Care and Digital Clinics – Dr. Amit Raj

Dr. Raj, a cardiologist and entrepreneur, discussed how AI can fill the “care gap” between hospital discharge and home recovery.

He described using chat-based systems to follow up with patients, monitor vital signs, and alert care teams automatically.
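The automatic-alert step in such a workflow is typically a deterministic check layered on top of whatever the conversation collects. The thresholds and field names in this sketch are illustrative assumptions only, not clinical guidance and not the system Dr. Raj described.

```python
# Sketch of the automatic-alert step in a post-discharge follow-up workflow:
# compare reported vitals against thresholds and notify the care team on a breach.
# Thresholds and field names are illustrative assumptions, not clinical guidance.

THRESHOLDS = {
    "heart_rate":  (50, 110),   # beats per minute
    "systolic_bp": (90, 160),   # mmHg
    "spo2":        (92, 100),   # percent
}

def check_vitals(vitals: dict) -> list[str]:
    """Return alert messages for any value outside its allowed range."""
    alerts = []
    for name, value in vitals.items():
        low, high = THRESHOLDS.get(name, (float("-inf"), float("inf")))
        if not (low <= value <= high):
            alerts.append(f"{name}={value} outside expected range {low}-{high}")
    return alerts

if __name__ == "__main__":
    reported = {"heart_rate": 118, "systolic_bp": 150, "spo2": 95}
    for alert in check_vitals(reported) or ["All reported vitals within range."]:
        print(alert)
```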

“These assistants don’t replace empathy—they extend it. They listen when we can’t.”

Dr. Raj also envisioned virtual health companions embedded in mobile devices, guiding patients through recovery, medication schedules, and lifestyle adjustments.

Trust, Regulation, and the Ethics of Adoption – Dr. Vijay Agarwal

As President of CAHO, Dr. Agarwal offered a reality check:

“Technology has always come with both efficiency and errors. The difference now is scale.”

He highlighted the need for:

  • Ethical frameworks to manage patient data.
  • Validation systems for AI tools before clinical deployment.
  • Community trust-building through transparency and responsible communication.

Dr. Agarwal called for national-level problem statements—clearly defining challenges like consent management, financial counseling, and second opinions—so innovators can build focused, safe solutions.

Closing Remarks and Future Pathways

Mr. Sachin Gaur concluded by thanking the panel and encouraging participants to collaborate:

“You don’t need to wait for the future—build it with the people next to you.”

He urged young researchers and clinicians to intern, co-create, and exchange insights, emphasizing that India’s healthcare AI ecosystem is fertile for innovation—provided it remains human-centered and ethically aligned.

Conclusion

The session highlighted a defining truth: AI will not replace doctors—but doctors who use AI may replace those who don’t.

From rural health to ICU analytics, from data labeling to reasoning chains, the discussion showcased how language models are becoming thinking partners—helping clinicians, educators, and patients navigate the next era of care.

Key Takeaways

  • AI is shifting healthcare from documentation to dialogue.
  • LLMs and SLMs enable domain-specific, context-aware intelligence for safer applications.
  • Evaluation metrics (EVALs) are crucial for comparing model accuracy and reliability in medical settings.
  • Human oversight and ethical governance remain non-negotiable.
  • The real promise of AI lies in empowering empathy, accessibility, and continuity of care—not replacing it.

Raw Transcript

[00:00] session, a very exciting one.

[00:20] the AI frontier, large language models and healthcare delivery in within five minutes of time. I request all the speakers to please be ready.

[00:40] care. You tell your symptoms to a chatbot and it pre-assumes what you're going through.

[01:00] health advices, these models are the ultimate medical sidekicks. So let's dive into our next and also exciting discussion on navigating the AI frontier, large language models in healthcare delivery.

[01:20] Mr. Sachin Gaur, Director, Operations, InnoVrio Curis, as the moderator. Mr. Sachin Gaur is a researcher-cum-entrepreneur in the space of mobile and internet solutions. He was among the Top 10 Innovators in India under the India Innovation Growth Programme 2013 by FICCI.

[01:40] of applause. Next I invite Mr. Ganesh Gopalan. He is the co-founder and CEO of GyanAI, with over 25 years of technology, leadership and marketing experience. Please put your hands together for Mr. Ganesh.

[02:00] I would like to invite Dr. Thaprathir Sethi, a physician, scientist and an associate professor of computational biology at IIIT Delhi. Next, I would like to call upon the stage Dr. Shith.

[02:20] He is an assistant professor at the Koita Centre for Digital Health, IIT Bombay.

[02:40] Please put your hands together.

[03:00] I welcome Dr. Vijay Agarwal, President of the Consortium of Accredited Healthcare Organizations (CAHO). Put your hands together for Dr. Vijay.

[03:20] Sir, what do you say?

[03:40] Q. After this session we have a few Q. Unions of some people at the end of the day.

[04:00] that he's given this role to moderate disease. And I called up each one of them to understand how to structure the disease. So I started with Dr. Abhavar to get very early that you have to put an initial in the last. First thing to take care is to.

[04:20] Then he will come in along with the amitards, is it possible or not or he will take their requirements. So, I will not take long, the idea here is that in the next.

[04:40] five, six minutes. I would like to bring you along with me so that we are not missing you in this conversation. This conversation is not for us. This conversation is for you. And if you put a session with the main area...

[05:00] So that is why I made three slides. Dr. Bijai told me that normally the loud movement is visible.

[05:20] So, I am following that instruction and I am also getting a hint to other panelist who are having slides that they should not execute this. So, here you see in my first slide something like a brain in the middle and some data on this side and some outcomes of the other side.

[05:40] So, air is this piece of economy which is limiting human self. So, for example, I bring air to equal to equal level. So, how are you going to stay equal to equal level? How are you going to do that?

[06:00] Have you learned it? Sir, this is how these models learn. So what computer scientists have done, they have taken the whole work of people, all the data that are out there and they have tried to build this machine.

[06:20] It can only do one thing, but it can stay for only one thing, which is to predict what is the next word. Does it sound magical? You can be honest with me. I don't think it sounds magical. Just this ability that you could say people to people little.

[06:40] So, what happens is that it is a very large amount of time and it is not just for language. It is actually put in this piece of thing, not just for language. You can put in voice, you can put in images, you can put in

[07:00] morning we spent three hours you know playing with some of these technologies. So one of the things that we tried was that can I speak to a computer? Can I speak?

[07:20] you can do a demo today in this five, six minutes that have been allocated to Miesen. That would be an example of what is possible with this paper. But then you can ask me, you know, I am here, I am sitting here today, maybe, you know, I don't know why I am here.

[07:40] So, I would like to challenge my panelists. Okay. These are buzzwords. But so, so my show, what is this? How many of you know about the momentaries?

[08:00] How many of you know that a few Nobel Prizes this year have been linked to AI? How many of you know this? So, this is the highest award that is given in the sciences. The physics prize has gone to Geoffrey Hinton and John Hopfield because they made

[08:20] artificial neural networks, taking inspiration from the human brain, to build something for computers which basically and fundamentally gives them a proper way to learn from data. So like I said, all of you are able to learn because you learned from data,

[08:40] They also learn, they make this technology possible for this human race today, what you call as artisanal human race. LAH are using fundamentally that piece of technology. I do not have to go into normalize it, otherwise I will use a proper process.

[09:00] So, the second prize is even more interesting. There was this company, now part of Google, called DeepMind. It solved a long-standing problem in the world. So, back in 1972, Christian Anfinsen won the Nobel Prize.

[09:20] So, in 1960s, a mobile price was given for somebody who tell us how to get a use.

[09:40] To know how things look at nano scale, you have to spend millions of hours. These guys, they warn about this one, where human scientists would spend a year, let us

[10:00] They could tell you by writing a program, and that program was AlphaFold. So, this Nobel Prize in chemistry was given to people including Baker and Hassabis. So, Hassabis is from DeepMind.

[10:20] A prize is given to someone in full use. Noble prize is such a big prize, it is only given for your entire body of work, let us say you are about to retire. So can you get a Nobel prize in full use?

[10:40] that is, many people were placing bets that the Nobel would be given to an algorithm. So, this is my version of the so what. So, let us question ourselves again why this session is important.

[11:00] If we can understand let us say the importance of electricity in our life, the importance of other basic things that we take as standard, intelligence or the air of receiving.

[11:20] is coming because of this situation. And any of you here cannot survive in your profession if you are not going to adopt this technology. So, again going back to my panelist, you know my challenge for them is that

[11:40] Don't you want an audience today and keep it to so far. And I think we are very lucky and also grateful that we have some of the power expertise in the country available on this platform today, which can give us

[12:00] So, I would like to end my context setting by making a call. So, let me try this.

[12:20] So, I have a Google form here with me and I am going to put in Dr. Thapri Sethi's number. Are you up for receiving a call Dr. Thap? Would you like to come here on the stage?

[12:40] on the diets. So I think people cannot see my screen, but would you confirm is this your number? Yes, yes. So for everybody's information, the next one minute what we are trying to do

[13:00] I am going to make a call to Dr. Thav. I will request him to pick up the call and put it on speaker. Let's hope the call comes because most demos don't work. Yeah, can you? Yes.

[13:20] Hello, this is John, your AI medical assistant. I am here to follow up on your recent treatment. It is now time to go through a few questions about your current health and symptoms.

[13:40] sure i have two minutes.

[14:00] could you please tell me when this symptom started. maybe about a week ago. noted, edginess in the throat started about a week ago. has the symptom been getting better, worse or staying the same since it started? it's the same.

[14:20] has remained the same over the past week. Have you experienced any fever or chills during this time? No. Thank you. No fever or chills reported. Have you noticed any cardiovascular symptoms such as chest pain, palpitations or shortness of breath? No. Thank you.

[14:40] No cardiovascular symptoms reported. Have you experienced any gastrointestinal symptoms such as nausea, vomiting, or diarrhea? None. Thank you. No gastrointestinal symptoms reported. Are there any additional concerns or symptoms you would like to mention? I would like to run now and wrap up the conversation.

[15:00] Thank you.

[15:20] So, this was I think on one of the goals the eye for my eye was to get a good eye on my eye. So, this was the first time I saw a good eye on my eye.

[15:40] I had to route him to the end of the demo. Because after I had finished. So I was blown away. So last night I had to invite him for dinner and the group came to build the demo for me. So I got this at 1 am last night. So I thought maybe we'll use this demo.

[16:00] As an example of what allylenes can do with his blood pressure. So this use case has come from the family. He wants to follow patients post-discharge because he is a cardiologist. So he wants to care about the patients who have been discharged and he wants to know what happened to them once they left the hospital.

[16:20] questions that can be asked by an AI and an exile could be filled at his end and you know at an office people would locate in the excel sheet things are not looking right let us send an ambulance or you know call a person for help for being here right. So this is a practical example

[16:40] So now I would like to invite Dr. Sankar to come up with a few examples for you and I'm sure you guys can take million more examples and I think our angle of today with this session is to come up with these kind of possibilities for you so that you are so what is answered. So thanks for your very good being with me and now I would like to invite Dr. Sankar to come up with some examples for you.

[17:00] because maybe you can tell us more about yourself and maybe try to answer so what.

[17:20] Thank you.

[17:40] So, essentially what I like to do is to find the business value of technology and today's topic is all about the impact. Let us start with what my company does. So, I have been here with business for the last 78 years.

[18:00] So when we started this company called Manidar AI, a lot of my friends told me that what we are doing is, we just mentioned, a lot of onboard young people. And none of this AI stuff is really noticeable. To me about AI is a very important thing.

[18:20] seven years later we all see that it is now fashionable to be in here and get invited for a long and a long basis. So, but what has really changed right, fundamentally one has changed this the technology has really matured and.

[18:40] And before talking more about technology, I will start by talking a lot of the use cases that we will not intend to take. I think we had a really good example of a use case for understanding the role of patient, the people.

[19:00] For example, we have a platform on patient, right, close to consultation. But there are many, many other use cases that for example we work for. First and foremost, let me talk about something very exciting and close to my heart. So when I started this whole AI business, I never really

[19:20] imagine that this will have an impact on things like mother's mental health. I don't see the connection. So I think you are not so bad, we got actually an injury. So we are doing an indigenization experiment.

[19:40] And this is to improve the nutrition of mothers in the first what is called the 1000 day program. And it is really exciting for us to participate in this. So, really what is happening here? The overall goals of the program were simple. Take a small voyage in the pelican.

[20:00] The people here speak a particular dialect of India, not the Hindi that we speak of in Delhi on the top. Now, for this segment of mothers, can we improve the nutrition and the energy?

[20:20] goals of maternal health as well as infant. And the way we went about this program and it's an intervention. So this is a program that's part of our SPSE about I think about 2000 we have lived with on the body workers.

[20:40] bunch of Asha workers. And the purpose of this program was to hear the heart of the program was a voice walk. Similar to the voice walk that we just saw. But there will be differences. So the voice walk has two major

[21:00] would say, could we get a bonus? One would be an empowered voice call. Impaled voice call could be, let's suppose, a mother asked some questions. How do we answer those nutritional questions? These are questions about pregnancy, questions about

[21:20] with a lactation phase and questions during the infant phase. Now if this could be questions the mother asked about things like I'm getting these algicid tablets and they're really safe. It could be questions on

[21:40] And the child is a headache, what do I do? So there are a bunch of these questions. So the way we went about doing this is there were a bunch of knowledge documents, a bunch of technology, I would say,

[22:00] about metamus health that are provided to us by this NGO. Based on all these documents, we constructed a model, an AI model. And I am going to be very careful with the amount of models, and there is a reason why. And this is an AI model.

[22:20] that model the, you know, we could answer questions that our mothers have. The second and most beautiful part of this is all these questions are digitized. So let us suppose the model can answer some question, then they can be E-trained for the next set of answers to be correct.

[22:40] Then there are these other questions, and then this is on one program. In this I-1 program, you know, there are reminders about vaccination. Right? Let's suppose there's a call that goes out and says, you know, you need to take this vaccination on this particular date. Right? Now, it's why does a particular person have a vaccine?

[23:00] I have not asked a channel to complicate this whole world because any other animal on the world, this is the correct time to take it. I am at this particular day of pregnancy. You are taking it now. My child needs vaccination. This is just the correct time. Is it safe? Is this vaccination necessary?

[23:20] So there is these questions that the early members would ask and the answers are given by the board. Again, there could be some answers, maybe it is not trained for, is the board may not answer. And again it would be in trained. Or in this particular case, we had a set of health care providers.

[23:40] who immediately received the call is the walker to the counselor. So this is, and we found some really good results as part of this program where we answered more than 30,000 questions that the mothers have. So again, you know,

[24:00] I also want to hypothesize this conversation too long, so I just partnered a few things that are very unique of Nth-galactic pictures. AI is not a magic weapon. It's not going to make doctors really remember that. Listen. The second part is what's going to really happen is the power of AI is going to be

[24:20] in terms of beta collection and overall acting as a brain, or finally for the next 2 years. In a lot of very basic examples, a lot of doctors and hospitals face problems with abominable cookies.

[24:40] into any sort of diagnostic stuff. So there are basic problems how to do space in, say, appointment booking. Basic problems how to do space is in many feedback about the customer experience. All of this are programs that we currently look for as well as our own.

[25:00] So there are blocks and depending on the culture of the country, it could be wall space, it could be x-rays, the US is a lot of minus it. In India it has to be walls in our own language. And these were captured with information and it would be more than scanned.

[25:20] models that are game brought on this captured information to provide additional insights.

[25:40] Once once an hour I was in India. So we do more than 10 million automated conversations on a daily basis. As a thing we did more than 40,000 concurrent conversations. By concurrent I mean like 40,000 conversations every day.

[26:00] So, approximately every conversation is on a main reason that is about the pro of conversations. In some ways of the month we use one of the five pros of conversations. So, there is an ability to scale these conversations. There is an ability to track what happens right or what happens your life.

[26:20] And just a point that I'd like to mention because the problem with the value proposition was in many lines. And it is like we say that the future of AI, especially for healthcare, is what we call SLMs. And these are actually small language models.

[26:40] The reason is that, as you learn and we learn more from the medical community, there are a lot of problems with LLMs and it is very dangerous. What if you get wrong results? What if there are wrong diagnostics? So how do you reduce the risk of all this stuff?

[27:00] So we, for example, in the US market we work with a very large program, for example, it's called Home of Care. And we have a model for health care. And this is an SLA model of the hidden...

[27:20] So the other thing that's important just to add some time on the motor program is if you can model this at least what they call this minimum of 7 million parameters. Right? All of them is adequate just about 1 million parameters and the rates are about 3 to 4 billion parameters. So it's a small model, sounds like a small model.

[27:40] Problems of security, sorts of problems, because this model can be put for the customer's now, but sorts of problems of latency, because your problem is related to the more a person do we ask an answer to a company's emergency. And how is the part of the accuracy? It is very important for everything. Because it is trained or biased towards healthcare.

[28:00] problems to a data that is relevant for the empty care industry and can be further tuned for a specific hospital or a specific geodemography. So I like to conclude here, so SLMs or small language models have a great role here.

[28:20] Thanks, Ganesh. I think 10 million is really commendable. So thanks for joining us here to wait from Sanjalob. And I think you will find, both on your left and right, a lot of potential to collaborate.

[28:40] because you need this technology what you are doing, but also for the use cases that what your panelists may be offering to use. So, an exultant can use one more job, an SLM, an NLM.

[29:00] about billion parameters, these numbers do not make sense. So, just to, just a head case, how many billion parameters are in our life? So, if you go to compare the human brain with a neural network, like artificial neural network, does anybody know how many parameters to use?

[29:20] you know, the brain has in order to compare. You know that the brain is made of neurons. So do you know how many neurons? Anybody? So the number of neurons we have is around 86 billion, but what Ganesh talked about is parameters.

[29:40] So these neurons have tentacles which connect. So when this lining is connecting, that's what we can call as parameters. So our brain, in terms of complexity, is still 100 times more complicated than the most advanced neural network that we have.

[30:00] Because new networks are still in agreement, we are in trillion parameters when it comes to the complexes in your development. So without taking further time, I just wanted to bridge this. I would like to invite Dr. Pratikesh because he is also someone who has been working for quite some time in this space. He has told you.

[30:20] Thank you for setting the context, and amazing work, Ganesh, that you are doing. So my job is basically to give you more of a researcher-practitioner's perspective. My background is in medicine. I have training on both sides, in healthcare as well as on the computational side.

[30:40] can speak with all the challenges and the opportunities that we have, all are evidence in healthcare. So we've got, if the hospital is at ease for the last 110 years, have been in the space for more than 15 years. So mostly with hospital with anti-psychines, labs, more of these centil

[31:00] So, language models are both as I mentioned before, but also as a result of the learning process, we have to be able to get the learning process done.

[31:20] And we need more conversation tools to be able to bridge some of this gap that we have about patient-to-patient. So, that is the excitement of the area that there is an opportunity that.

[31:40] But we can bring some of the AI specifically to the language models to address some of these gaps that we have at the current, particularly at the moment, what is the country also going to do. But there is also, I would like to point out what are the challenges that we need to be aware of when we are positioning AI into kind of a

[32:00] So, we work a lot with ICU settings, and we have worked for 10 years in hospital settings, and a lot of the data that we work with at AIIMS, for example, is more than 1.5 million patient hours of continuous monitoring data at every-single-second resolution from ICU settings and from sensors.

[32:20] So, the first opportunity that comes to us is can we use language models to read these notes in some ways and then start perhaps understanding what may be the future trajectories of this patient look like, right? And then start making prediction notes future. The second opportunity is can we start labeling some of the steps.

[32:40] So, we need these mapping to exist, or you call it interoperability.

[33:00] So, that is the second opportunity for language models to actually break in some kind of interoperability. And I think that was also a bit too by a couple of hours in this discussion that we need interoperability. So, that we have a standardized way.

[33:20] of training data for these large language models. What we find is that even if we use state-of-the-art large language models and fine-tune them, they are not accurate enough to actually transition into care delivery. Healthcare research, yes, maybe development, but

[33:40] So, what we believe is basically that these language models have been trained on data from the internet. All of us know that ChatGPT, GPT, Llama, all these are trained on data from the internet.

[34:00] So, we have to have a lot of data that is generated from the public. And it is also a potential for feedback to be given to these models to be tuned towards that particular setting. So, what we are transitioning into, you know, how that, and our use case, most of the important use cases that we are going to run this.

[34:20] So, what we are doing now, in a grant supported by the Trinity Challenge from London, is basically looking at community-level data collection for antibiotic resistance, to understand what is the level of self-treatment using antibiotics over the counter. That is a big challenge.

[34:40] We also understand from our own research that as life involves will not air yet to accurately capture it.

[35:00] So what we are trying to build now is something called reinforcement learning with human feedback, so that a particular setting should be able to fine-tune the language model towards that kind of scenario. If I am a small clinic, I should be able to fine-tune the language model, whether large language models or small language models, to my particular unit.

[35:20] So, we have to have a lot of information about how to use it.

[35:40] So, first of all we do not believe, I do not believe, that AI is going to replace doctors. Doctors are always going to have the sign-off power. They will sign off whether this radiology report is correct or not, whether the prescription that I am giving you or the doctor is giving you is one you should take or not.

[36:00] So, how can we build this relationship? So, we have to have a lot of information about how to build this relationship.

[36:20] So, we have to have a lot of information about the system.

[36:40] There is no such thing as a component of the researcher. I have a slight debt which is not of its own, but it has a lot of animations that are just type of data.

[37:00] have more information there. So, some should be done by the public and like Dr. Biket, I was also the head of Dr. Biket and then very soon realized that I was not good enough to practice medicine.

[37:20] transition to a B.C. and then stomach upon the milk, some sulphur and all that and not sufficient to play. And then I was always looking for a doctor who understands how to have an infant and second back to the lab. And very recently actually I have been on replication to just this month on this actually.

[37:40] They're pretty sure they're in the history of education. And the implementations of these centers of excellence of the I. They're establishing these centers of excellence of the I across the country in every particular system of the cities. So we need the project management unit. So spend half of what I do in the lease home for the I.

[38:00] It's a 1,000-hour project, so 3,130 hours each centimeter. Now it comes to me. So again, giving your researchers' perspective, I caught on working through some of the three problems that I had.

[38:20] In my short duration as an assistant professor of biology, I work on these three problem spaces with my various methods. So we have a number of large language models and something called chain of thought. So what is chain of thought? So when you start thinking.

[38:40] So, this is generally how we think, right. And what do we put into a large language model? It is called a prompt, right. We understand these certain large.

[39:00] So, for standard prompting you actually give an example and an answer, then you ask a similar question. This is what you do, a standard prompting mechanism. So you say, this is an example: Roger has five tennis balls, he buys two more cans of tennis balls, and each can has three tennis balls.

[39:20] So how many tennis balls does he have now? We also tell the answer: the answer is eleven. Then we ask a very similar question, very similar question: the cafeteria had 23 apples, they used 20 to make lunch and bought 6 more, how many apples do they have? The thing is that initially, when GPT-3.5 had come, it could not get this

[39:40] answer in the end. So there's a very famous point to actually make. So a guy actually asked a community 3.5 million protons, what is food mystery? Community 3.5 million per cent is food. And he said, what my wife says is it's fight. So community 3.5 is one. Your right is back.

[40:00] So, the vice is always select, right. So, the concept that we are trying to touch upon here is that there is no notion of correctness

[40:20] So, these models are trained on determining the next most probable token. The notion of correctness does not exist. So, it is very, very crucial to understand that we should not confuse fluency with intelligence.

[40:40] But if you ask the same question and you tell it to break it down, you get the answer by breaking it down. You say: Roger starts with 5 balls, the extra cans have 3 balls each, which makes 6 balls. So, 5 plus 6 is 11. So, the answer is 11. So, you have now taught the model how to think. And you ask a very, very

[41:00] similar question. The same question which it could not answer before, it is now able to answer correctly, because we have told it step by step how to do this. So what we do now is try to do this in healthcare. I really wanted to understand how I can generate chain-of-thought-driven content and utilize it.

[41:20] You can reach a final differential diagnosis with the disease name. You can always reach a good answer, but you also need to know why. That's an explainability process. So how do we measure it? So we developed a chain-of-thought prompting mechanism where we actually gave five medical questions and their explanations.

[41:40] So, when a patient walks into your door and there are separate doctors in this article, so what happens? You look at the patient and you ask, you ask, and you get a set of symptoms. Once you get a set of symptoms, you make a decision like this and you head the big page.

[42:00] Then you do some examination. When you do some examination, you believe a few differential diagnosis and you add a few differential diagnosis. And then, ultimately, you do some investigations and you reach a final diagnosis and you start thinking. So this is what we wanted to do. So we end by discussion there. And 59 years of

[42:20] woman who is present with urgent care with complaints of severe abdominal pain for the past two hours. She complains of lung pain and a lack of emotional vomiting inside the lungs. Aim has more relation to food, past marriage is significantly lower. Other guy at the age of 60 is some sort of abdominal cancer. So family history is also there.

[42:40] temperature is normal, the respiratory rate is normal, the pulse is normal, the blood pressure is normal, disease, reformation, unremarkable. Although the CT scan shows a calcified mass mirror or bladder. Do you think as soon as you see a calcified mass mirror or bladder, what is the question? College schools, very straightforward. The question is which of the following

[43:00] So, this yellow arc that you see right now, this is actually what is already there in the question. And what is the video clip?

[43:20] So the main group is the reason given by the model. Since you have a 59 year old woman with prostate, cuticle, and this is how much of it is present. In the case of the main group, the group also. Being a seeker, no relation would be the group also. Asian is overweight, which is a risk factor for the person. This is very interesting. I give my last one risk factors.

[43:40] But it's stopping the reasoning out, based on the chain-up for content that I've given. Okay, it should go away to this potential for falsehoods. It should have not shared non-re issues with the reason for falsehoods. Well-paid impact could be into the falsehoods. Vibals are normal, which indicates no immediate threat to life. Very interesting. I did not ask it. It is now thinking.

[44:00] The patient's physical examination is undemountable, no evidence of underexploation, cancer-admast which is the case for a stroke. Since the patient has a history of requiring adorability to the colon and eneposis, does this acute cholecystitis? So the diagnosis to exclude because your family's key of cancer is going up to cancer.

[44:20] So you see, you are step by step reaching the final answer. Now are you going to use this in clinics? Absolutely not. Because we cannot do that in clinics. But what we are trying to get at is this: when I was trained as a doctor, nobody taught me chain-of-thought reasoning to reach a differential diagnosis.

[44:40] So, we have to actually follow a doctor on the round standing, the tallest guy used to be the furthest I was actually in the patient and doctor is observing the patient. But I have no observation, but there is no standardize way this is taught in the medical space and this is the tool that we are trying to develop with management of health sciences that we have this multicultimization.

[45:00] So, students get to learn this virtually as to how to use a procedure with a general use. So, this is one of the discuss. Second is extracting information. We talked about, extracting about a lot about this extensively. So, we actually regard this as putting out an amount of hospitalization. There is a lot of medical targeting in the system.

[45:20] So, we could just prompt it using lot of language numbers. We use Gemini here and we could extract all the information that we wanted into the from deep distress, an anonymous distress numbers. And one final thing that I would just like to talk about, say I work with this NGO called Enver Icy. So, what Enver Icy does is that it takes

[45:40] So, we have a lot of problems with the cell.

[46:00] So now we have the set protocols. Now the nurse can ask people's questions.

[46:20] So, you can see that the system is not required to do this. So, you can see that the system is required to do this.

[46:40] questions, it's not an individual question. So these are my two cents. Thank you. So next is Sravantan. I had only met him or spoken to him a couple of days ago. He's the youngest in this

[47:00] Q. So I am going to insist to monitor to keep it very short and if you could explain very shortly let's say in a minute why did you...

[47:20] leave your job to do the job. First of all, maybe you can tell what was your job before. And second thing, what is the importance of evaluation? So if you can answer these two. Okay, first of all, thank you for having me here. I am probably the youngest in the panel.

[47:40] And I was a radiologist and did my training at the Influgale League and subsequently my senior residency there. And I am transitioning currently to full-time AI research at the Koita Centre for Digital Health at Ashoka University. So I am a very big...

[48:00] proponent of democratization of knowledge as well as technology. And I had started working with computational problems when I joined AIIMS Delhi for my thesis, but gradually I shifted to working with large language models.

[48:20] I will tell you the reason for my transition, but the major difference which I found between radiologists doing residency here in the Indian setup and in the US was that they use predominantly dictation when they type their reports, but in India we

[48:40] usually don't have so much money in our residency programs to pay for dictation software in the government setup. So we just take to typing, which is very inhibiting compared to speech. So I was thinking, why not try out how we can do these things better.

[49:00] And then at that time, Whisper had come out a year and a half back and it was open-sourced. So what we did was that we tried out Whisper, and it was doing very average with medical terminologies. And just in a week, we put...

[49:20] Then they announced that they were open-sourcing their large language model, Llama. So we did a combination of Whisper and Llama and found out that there was almost a 20% decrease in the word error rate. So word error rate essentially helps us measure the errors which come in the output.

[49:40] Finally, the word error rate came down to almost 0% after prompt engineering, which was one of the best, in fact state-of-the-art, performances, equally good as the software available in the US, which charges you thousands of dollars.

[50:00] So, we already presented in Arizona and it was very well appreciated. And this is where I realized that there is tremendous potential of AI. And Progulsor's AI, in my opinion, is the future, wherein each of us has access to these large language models.

[50:20] build these tools based on our own use cases. And so, I'm not from a very technical background, right? I did an MBBS and an MD. I just persevered with it as a hobby, but I wanted to delve more into the technical parts so that I understand a little bit better how these models work,

[50:40] so that we can make better models for the use cases in healthcare, and that is about why I am shifting to doing full-time AI research at Ashoka. Coming to EVALs, the reason why evaluation is important is that the large language models which we use

[51:00] nowadays do not always give the output that you want. In fact, if you look at what is in production today, most of the large language models we have

[51:20] sit behind a chatbot interface. But the future of LLMs lies in a concept known as report generation. Right now a human basically has to sit there, copy from the outputs of the LLM, and then turn that into the report, PPT, or PDF in the format you want.

[51:40] But better and better tools are coming up which can actually create that PDF or PPT in a well-aligned manner and give it to you. That is the future of LLMs, and to figure out how good each LLM is at the work it is assigned, we need to evaluate it.

[52:00] So evals are essentially evaluation metrics that define how well or how badly an LLM is working. Why this matters even more in healthcare is that, at the end of the day, when you are paying for a piece of software, you need to understand whether it is going to work in your hospital or in your facility,

[52:20] and whether it is actually good, right? You will have ten, or sometimes even hundreds, of LLMs competing for use in your setup. So how do you decide which one you should use? That is the role of evals.
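To make the idea of an eval concrete, here is a tiny illustrative harness in the spirit of what the speaker describes: the same healthcare-flavoured test cases are run against each candidate model and scored. The test cases, the keyword-based scoring rule, and the dummy generate functions are all hypothetical stand-ins, not a real benchmark or any vendor's API.

```python
# Toy eval harness: score candidate models on whether required terms appear.

TEST_CASES = [
    {"prompt": "Summarize: 62-year-old male, post-CABG day 2, SpO2 94% on room air.",
     "must_mention": ["post-CABG", "SpO2"]},
    {"prompt": "Draft a discharge note for community-acquired pneumonia.",
     "must_mention": ["pneumonia", "follow-up"]},
]

def score_model(generate) -> float:
    """Return the fraction of required terms that the model's outputs contain."""
    hits = total = 0
    for case in TEST_CASES:
        output = generate(case["prompt"]).lower()
        for term in case["must_mention"]:
            total += 1
            hits += term.lower() in output
    return hits / total

if __name__ == "__main__":
    # Dummy "models" standing in for real LLM endpoints.
    candidates = {
        "model_a": lambda p: ("Post-CABG patient, SpO2 94%, stable. "
                              "Pneumonia resolving; follow-up in one week."),
        "model_b": lambda p: "The patient is fine.",
    }
    for name, generate in candidates.items():
        print(f"{name}: {score_model(generate):.0%} of required terms covered")
```

Real healthcare evals use far richer scoring (clinical correctness, hallucination checks, human review), but the selection logic is the same: fixed test set, comparable scores, and a decision about which model to deploy.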

[52:40] Now, are there generalized evals available for the healthcare setting? There are a few, but specific evals for specific use cases are still being developed. That is something I have also worked on at a lab at Harvard Medical School, where we are trying to develop open evaluation metrics for

[53:00] essentially figuring out how good or bad LLM-based report generation models are. Before I hand it back to you, I just want to leave you with the future of LLMs over the next one year, which I feel is going to be

[53:20] defined by something known as agents, and a lot of work is being done in this regard. Why agents are important is because they can take the productivity gain you get by using LLMs to almost

[53:40] 50x or even 100x. I am working with a couple of really smart people to develop something in report generation for radiology. What I have to do right now is go to my app, open that particular section, enter the patient details, the clinical history, and the findings, and finally I get an output.

[54:00] But with agents, we essentially have a series of LLMs, each specialized in its own task, and we just give a prompt. I would just say, for example, "Hey Jarvis, I see a patient having the features of miliary tuberculosis, and I think almost

[54:20] all the features are there in this scan of the patient, John Doe." These expert agents work in the background, and when I go back to create my report, it has already created a preliminary draft, ready for me just to sign off. So I think, going forward, a lot of work will be done on the evaluation of agents as well as their

[54:40] development, but it is very exciting for the future. Thank you.
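As a sketch of the agent workflow described above, the toy pipeline below routes a single request through a few specialised steps and leaves a preliminary draft for the radiologist to sign off. Every function name and the report template are illustrative placeholders, not the speaker's system; in practice each step would call an LLM or a hospital system rather than return canned text.

```python
# Toy agent pipeline: one request, several specialised background steps, one draft.

from dataclasses import dataclass

@dataclass
class ReportDraft:
    patient: str
    clinical_history: str
    findings: str
    impression: str

def fetch_patient_context(patient: str) -> str:
    # Placeholder for an agent that pulls details from the RIS/EMR.
    return f"History on file for {patient}: chronic cough and weight loss."

def draft_findings(request: str) -> str:
    # Placeholder for an agent that expands the dictated request into findings text.
    return "Multiple tiny random nodules in both lungs, consistent with a miliary pattern."

def draft_impression(findings: str) -> str:
    # Placeholder for an agent that condenses the findings into an impression.
    return "Findings suggestive of miliary tuberculosis; recommend clinical correlation."

def run_agents(request: str, patient: str) -> ReportDraft:
    history = fetch_patient_context(patient)
    findings = draft_findings(request)
    return ReportDraft(patient, history, findings, draft_impression(findings))

if __name__ == "__main__":
    draft = run_agents(
        "Patient shows features of miliary tuberculosis on the chest study.",
        patient="John Doe",
    )
    print(draft)  # Preliminary draft, ready for the radiologist to review and sign.
```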

[55:00] I mean, I had 15 minutes, and I still have two super experts to go. So I would request them to be brief so that I can at least ask one panel question, if not more. And I know Dr. Amit has nine slides.

[55:20] So I would like to request them to be very crisp. But then, you know, we have already seen that the people who are in the tech or research camp were already doctors, so they bring that sensibility, and I think that is why I was asking each of them to answer this question.

[55:40] Because I think the closing remark in Dr. Tapan's talk applies to all of this. And I have spoken a number of times with Dr. Tapan, and he has spoken about this topic before, and I think, in my mind, it

[56:00] is very important. Maybe we will have a discussion about it at a later time, because we do not have the opportunity today; but people from the trenches can tell us, you know, what their requirements are and where they need help. Yeah, and this is happening.

[56:20] Good evening, everybody. Thank you, Sachin. Well, again, I reiterate: Dr. B. K. Singh is a mentor, a role

[56:40] model for me. He visited one of my centres and he was like, wow, this is the impact. So, you know, I am a practicing cardiologist and an entrepreneur as well, and in my journey of both entrepreneurship and

[57:00] clinical practice, I have learned, rather from business school, to use a digital approach in your clinical practice to sustain, to scale, to add value, to provide more trust to the patients, and to also

[57:20] grow. This is what I have learned over these years. For the last five years I have been using digital approaches in my ecosystem, creating AI-based rural digital clinics. Now, when

[57:40] patients are getting discharged, in usual practice, once the exacerbation settles and they are fine, nobody thinks about them. So, coming to my topic of navigating the AI frontier, this is one area where I need to bring value to my discharged patients: how we communicate with them.

[58:00] So what I need is an assistant, what I need is a companion, someone who, like a doctor, can help at least those patients who find it difficult to have somebody, you know, hear them out

[58:20] about those small, seemingly silly problems. And LLM-based AI solutions are the best listeners. We clinicians, general practitioners and physicians alike, often do not have enough patience for that. An AI listener can be tuned in

[58:40] such a way that it gives even more value than what a doctor alone can. And, as the speaker before me mentioned, the next thing coming is agents, that is, large action models. So we have seen what LLMs can do and what

[59:00] they could do in a hospital setting. You know, both are complementary, for different purposes, but with amazing use cases. This is a picture I had dreamed of, you know, for my in-patients, in my hospital, in my ICU.

[59:20] There is a separate part of my digital health setup where I have data flowing in from my digital health units: ECG, BP, SpO2, you know, three parameters. And I dream about a digital twin, that is, a digital twin of the hospital. And then, you know, what

[59:40] do LLMs mean to me? To me, they are simply GPT-based models designed to understand and generate human-like language, and their applications, the use cases for me, are in areas like medical

[01:00:00] documentation: helping healthcare professionals summarize a patient's history, transcribe conversations, and generate a concise review with accuracy and speed for the senior consultant. So it is an add-on to my EMR, or rather my HIS. Instead of me

[01:00:20] asking the same question, the machine is asking the same question, and in the patient's own language. It even went along with my own, what do you call it, you know, English slang.

[01:00:40] Was it more human? What would happen? Okay, just to wrap up: you saw the assistant demonstrated at the start of the session. I am using a similar assistant in my ecosystem for prolonged-stay patients in the ICU, and it is giving fantastic results.

[01:01:00] So I am going to evolve that into a more robust solution, collaborating with, you know, the techies around, if somebody here is listening: bringing together the insights of the tech side and what I need as a clinician, and meeting that need

[01:01:20] with the solutions that are provided. Then, clinical decision support: the system can assist the physician, but again, you know, it can never be a substitute for the clinical judgment of the physician. This is an important element. It can break down complex medical

[01:01:40] terms and treatment plans into simple language, so that patients understand in the simplest, most understandable form, in whichever language they are, you know, comfortable with. It may also, you know, help with research and documentation. And the most important one here is the virtual health assistant.

[01:02:00] I mean, I am going to take this up as my next start-up; I am already, you know, working on it, and this is going to really help me. A virtual assistant can interact with patients, collect preliminary information, share information, and answer certain questions.

[01:02:20] It can expect answers from the patients and arrive at a differential diagnosis to ascertain the criticality. As a cardiologist, I want to make sure a patient does not go into failure post-op; that is my primary concern, and that the oxygen does not go down. So we are asking certain questions, and this can help me.

[01:02:40] So this is very valuable, and we need to manage it well. This virtual assistant is going to be at the patient's side, on their phone, on their laptop, wherever they are. This is how technology can enhance the trust between the patient and the doctor.
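A small illustrative sketch of the post-discharge follow-up the speaker describes: a virtual assistant asks a fixed set of questions and escalates to the cardiologist when an answer crosses a simple red-flag threshold. The questions and thresholds below are invented for illustration only; they are not clinical guidance and not the speaker's product.

```python
# Toy follow-up triage: escalate when a patient-reported answer breaches a red-flag rule.

RED_FLAG_RULES = {
    "spo2": lambda v: v < 92,                  # patient-reported oximeter reading
    "breathless_at_rest": lambda v: v is True,
    "weight_gain_kg_3days": lambda v: v >= 2,  # possible fluid overload
}

def triage(answers: dict) -> str:
    flags = [q for q, rule in RED_FLAG_RULES.items() if q in answers and rule(answers[q])]
    if flags:
        return "ESCALATE to on-call cardiologist (red flags: " + ", ".join(flags) + ")"
    return "Routine: reassure the patient and continue scheduled follow-up"

if __name__ == "__main__":
    print(triage({"spo2": 90, "breathless_at_rest": False, "weight_gain_kg_3days": 1}))
    print(triage({"spo2": 97, "breathless_at_rest": False, "weight_gain_kg_3days": 0.5}))
```

In a conversational-AI version of this, an LLM would collect the answers in natural language and the escalation rules would stay deterministic, keeping the clinical judgment with the physician.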

[01:03:00] We are also working with large action models, where from LLMs we would move on to action: workflow automation, robotics, then medication management and dispensing, remote patient monitoring, personalized health information.

[01:03:20] So, you know, these are the practical use cases which I feel, as a clinician, I need: tech that helps my patients get the best out of technology as well as out of me.

[01:03:40] We have had to shorten Dr. Amit's slot; he has a lot more to offer. That is our loss, it is just the time, we have to move on. I think Dr. Amit could have easily been placed in the tech camp, not the clinical camp; I think my fellow panelists would agree.

[01:04:00] Right? So maybe not. I am practicing. I just mean that, yes, you know, both the tech and the clinical outcomes need to be, you know, complementing each other. So now I need to go to Dr. Vijay and ask him: Sir, does any of this impress you?

[01:04:20] Any change of mind?

[01:04:40] I look at healthcare as a paradox, where in the last 100 years the amount of medical advancement that has taken place should have made medical professionals very proud of where they are. The average life expectancy at the time of independence was just about 30 years.

[01:05:00] It is now 75 years, and some of us have even defied that average age. But see, the issue is that, in spite of all that, wherever you go, the most common kind of thing that a medical professional has to hear

[01:05:20] is that you people are looting us, it has become a commercial business, there are no ethics left in the medical profession, and so on and so forth. Why is it happening? It is really painful. It is very painful when

[01:05:40] you patiently try to explain that, within my own lifetime, people used to go to the United States to get a bypass surgery. Today a bypass surgery is being done in any B-class or C-class town; you go to any place and it is being done. And it is being done at a cost of about 3,000 to 5,000 US dollars.

[01:06:00] The same surgery is being done in the United States at a cost of about 60,000 to 80,000 US dollars, and still the medical professionals are hearing the kind of sentences that I just quoted. Now, the reason for that is that, you know, technology has been coming in right from the earliest imaging and

[01:06:20] surgical advances. All those things came, and all those technologies were lapped up by the clinicians as well as by the hospitals, because they were bringing revenue to them. The revenue came in, and it stayed within the hospital.

[01:06:40] It was, of course, benefiting the patient, but whenever you bring in this kind of technology, it also brings a lot of errors along with it, because care was now shifting from being just a doctor and patient relationship to a whole network, and that is where we as doctors

[01:07:00] failed to grasp it. I am saying that I have been looking at this healthcare ecosystem from that kind of an angle. Healthcare is a highly fragmented delivery ecosystem in our country, and maybe even around the world. And what I am saying now is that

[01:07:20] it is time, you know, that we come out with solutions where at least some problem statements are first of all accepted, and towards those problem

[01:07:40] statements we then look at the solutions. There could be 10 solutions, there could be 20 solutions, and then they can be evaluated. But we need to frame some problem statements. For example, I will say that one of the good, disruptive things that came about recently was that we

[01:08:00] realized that when patients are not in the ICU, when they go into the ward or into the step-down unit, a nurse comes every 4 or 6 hours to take the vital signs. And you realize that the patient may have a problem just 15 minutes after she has taken the vital signs.

[01:08:20] There could be a method of monitoring vital signs on a continuous basis, and people have now come out with solutions; there are solutions, there are companies, and I will say a significant amount of progress is being made. So at least the hospitals should start understanding that there is a solution to address this particular problem statement.
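To ground that problem statement, here is a minimal sketch: instead of a vitals check every four to six hours, readings stream in and an alert is raised the moment a simple deterioration rule fires. The thresholds and data are illustrative only; real systems use validated early-warning scores and proper device integration.

```python
# Minimal continuous-monitoring check over a stream of vital-sign readings.

from typing import Iterable, Optional

def first_alert(readings: Iterable[dict]) -> Optional[dict]:
    """Return the first reading that breaches any illustrative threshold."""
    for r in readings:
        if r["spo2"] < 92 or r["heart_rate"] > 130 or r["systolic_bp"] < 90:
            return r
    return None

if __name__ == "__main__":
    stream = [
        {"time": "10:00", "spo2": 96, "heart_rate": 88,  "systolic_bp": 118},
        {"time": "10:05", "spo2": 95, "heart_rate": 92,  "systolic_bp": 114},
        {"time": "10:10", "spo2": 90, "heart_rate": 121, "systolic_bp": 104},
    ]
    alert = first_alert(stream)
    print("Alert:", alert if alert else "no deterioration detected")
```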

[01:08:40] Now similarly, for example, think of what patients go through today. Actually speaking, a lot of

[01:09:00] these digital health ideas do not require any inputs from the healthcare side, because if you ask even GPT or Claude, all right, what can LLMs do to address problems in the healthcare system, I tell you, it will give you a beautiful list. And that is where many of them get started as well.

[01:09:20] To take one example, financial counseling is a very important thing for the patient to know about, because an admission

[01:09:40] can cost you lakhs of rupees. But I feel that today, with this AI, there is data: there are a number of payers, and the patient could belong to different kinds of payer mechanisms; there are different categories of admission in the hospital; and then he has, of course, different inputs in relation

[01:10:00] to his physical condition. So can we try to create at least a draft financial counseling document which can be given to the patient?
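As a sketch of that idea, the snippet below assembles the structured facts the speaker lists (payer type, admission category, condition-related inputs) into a prompt that could be handed to an LLM for drafting. The field names, the tariff figure, and the template are hypothetical placeholders, not CAHO guidance or any real hospital's tariff.

```python
# Assemble structured admission facts into a draft-financial-counselling prompt.

def build_counselling_prompt(patient: dict) -> str:
    return (
        "Draft a patient-friendly financial counselling note in simple language.\n"
        f"Payer type: {patient['payer']}\n"
        f"Admission category: {patient['admission_category']}\n"
        f"Planned procedure: {patient['procedure']}\n"
        f"Estimated package (INR): {patient['estimated_cost_inr']}\n"
        "Explain what is covered, likely out-of-pocket items, and whom to contact."
    )

if __name__ == "__main__":
    prompt = build_counselling_prompt({
        "payer": "private insurance (cashless)",
        "admission_category": "semi-private ward",
        "procedure": "elective CABG",
        "estimated_cost_inr": 350000,  # illustrative figure only
    })
    print(prompt)  # This prompt would then be sent to an LLM to produce the draft.
```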

[01:10:20] Similarly, I will say, take the second opinion. We need to create a system where the community starts to trust us, where the community starts to trust the healthcare ecosystem. So maybe some organizations can come out and say, all right, you have a problem; based on all this, there could be a

[01:10:40] platform for a second opinion, even if it is just a summary. Now, all of these are great examples of where LLMs can be used. So the point I am trying to make is that I think there needs to be some organization of this effort. Then, for example, there are

[01:11:00] people who are now trying to create, for example, consent forms. You know, a patient who is undergoing a bypass surgery or a knee replacement or anything of that sort, everybody needs a proper informed consent, and we can work on a consent form

[01:11:20] which is very elaborate and can be in many languages; it can even be generated in an appropriate format by the AI. So the areas are huge. That is where, for example, I bring you greetings from my organization, CAHO, the Consortium of Accredited Healthcare Organizations.

[01:11:40] We have started a healthcare directory. We are inviting people: whatever solutions you are creating, please put them in that directory so that at least the end users and you can start getting connected. And let us start to create broader

[01:12:00] problem statements and say that under each there are 10 solutions, there are 20 solutions, and then after that comes what we were talking about, evaluation, which is going to be key. Today there are hundreds and thousands of mobile apps available for healthcare. The issue is, how do you validate them, how do you

[01:12:20] choose? We need large-scale adoption; let LLMs also be taken to large-scale adoption to change the spectrum of healthcare delivery into safer healthcare delivery, and let us restore the trust of the community in the process.

[01:12:40] I think, sir, nobody could have summarized it better than you did, with this clarity of thought. We are now almost at the end of the time slot, because I could have

[01:13:00] very well spent two to three hours with these guys, brainstorming and maybe building some kind of convergence on various topics. But I think each one of the panelists here brings some deep expertise. I think the most

[01:13:20] important takeaway for me, as a layman, is how we can take the wider audience along. Some of you have made those choices early, and it would be really enriching for us to learn from them:

[01:13:40] what choices to make, which initiative, which digital effort, which digital vision. Even in this conference, I feel that a lot of people get left behind because there is a lack of vocabulary in the way

[01:14:00] we articulate our needs. But at the same time, we have a tea break right outside. So I was not able to do full justice to my role today, I am sorry for that, because I probably could not bring out all the nuggets from the panelists due to the scarcity of time.

[01:14:20] We will close the session here and move on to the formalities. But I would really encourage you not to let the panelists leave; this is your opportunity, you have to catch them

[01:14:40] and extract all the knowledge. Because I learned from Shithij, for example, that he was going to Jammu, and somebody called me from IIM Jammu saying they were looking for internships. I told them, there is this guy coming, he has plenty to offer, please catch hold of him.

[01:15:00] He will give you all the internships. So I can tell you, in front of him, that there is a possibility for him to facilitate a role for you, an internship at a centre of excellence for AI. And if you don't ask for it, you will not get it. So please use this platform

[01:15:20] to connect with these experts. Like the way Dr. Vijay spoke about the directory: we were just discussing in the workshop, when one of the doctors was asking, which model should I use, is there a list of models somewhere? So you have a lot of takers there.

[01:15:40] So do let us know how people can approach you on this journey and build the next generation of solutions for India. So thank you very much. My deepest gratitude to all of you. Thanks for giving me your time.

[01:16:00] We have just explored the AI frontier and, needless to say, the future indeed looks very healthy. We will quickly be moving towards the felicitations, and I request Mr. Sathya

[01:16:20] to please come forward and felicitate Mr. Ganesh Gopalan.

[01:16:40] Please put your hands together. Come on. I request Mr. Sathya to felicitate the next panelist.

[01:17:00] I know we are a little low on energy right now, but there is still some residual energy left. Next, I request Mr. Sachin to felicitate Dr. Shithij.

[01:17:20] I request Mr. Sachin to felicitate Dr. S

[01:17:40] I request Mr. Sachin to felicitate Dr. Amit Raj.

[01:18:00] Please put your hands together.

[01:18:20] At last, I request Dr. Vijay Agarwal to please come forward and felicitate Mr. Sachin Gaur.

[01:18:40] Please put your hands together. I request all

[01:19:00] the speakers to please come forward for the group photo.
