Spencer Dorn on LinkedIn: AI agents are having a 'ChatGPT moment' as investors look for what's next… | 21 comments (2024)

Spencer Dorn


Vice Chair & Professor of Medicine, UNC


AI agents are entering healthcare. Will they live up to the hype?

Agents move beyond chatbot functionality (i.e., simple responses) to automate multi-step tasks by making decisions and taking actions without humans in the loop.

As this article explains, AI agents are proliferating across consumer services. For example, Klarna, an online payment processor, recently reported that its OpenAI-powered agents now handle two-thirds of customer chats – equivalent to the work of 700 full-time human employees.

Agents are moving into healthcare, too. A recent Elion briefing mapped the burgeoning wave of companies racing to deliver AI agents that automate healthcare call centers by summarizing calls, supporting next best actions, and surfacing relevant information. It’s enticing to imagine how agents could automate and streamline multi-step processes, such as processing referrals, scheduling appointments, and completing pre-visit registration. Agents may patch staffing shortages, reduce costs, and cut waiting times. But what are the potential costs?

For one, can we trust agents to complete multi-step tasks? Consider an appointment-scheduling agent that performs any individual step with 96% reliability. If the process includes 5 steps, end-to-end reliability drops to 82% (0.96 × 0.96 × 0.96 × 0.96 × 0.96). Would anyone with decent standards accept that?

Second, beyond reliability, how will agents affect consumer/patient experiences? Consider supermarket self-checkout, ordering dinner via QR code, or trying to process a return using a chatbot. In some situations, self-service works well. In others, it is quite maddening.

Like all technology, the question is not whether AI agents are good or bad but when to use them.

https://lnkd.in/gb6ZAxDs
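The step-wise reliability figures above compound multiplicatively. A minimal sketch of that arithmetic (the 96% per-step figure and the 5 steps come from the post; the function name is my own):

```python
def end_to_end_reliability(per_step: float, steps: int) -> float:
    """Chance that every step of a multi-step task succeeds,
    assuming each step fails independently of the others."""
    return per_step ** steps

# The post's scheduling example: 5 steps at 96% reliability each.
print(round(end_to_end_reliability(0.96, 5), 2))  # 0.82
```

Even small per-step error rates compound quickly: at 99% per step, a 5-step workflow still completes correctly only about 95% of the time.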

AI agents are having a 'ChatGPT moment' as investors look for what's next after chatbots cnbc.com


Spencer Dorn

Vice Chair & Professor of Medicine, UNC

1d


Related thoughts: https://x.com/random_walker/status/1778770599340290103 Elion briefing: https://mailchi.mp/elion/revenue-cycle-automation-6738485?e=6b2729dfdb


Joshua Liu

Co-founder/CEO, SeamlessMD | enabling health systems to digitize patient care journeys with automated reminders, education and symptom monitoring - leading to lower LOS, readmissions, and costs | Physician entrepreneur

1d


In your example: What's the reliability of human agents today for each of those 5 steps? What's the cost of the human agents relative to the AI?

If the AI agent is just as good as the humans (even at 82%), it will be a no-brainer switch, and it will only go up from 82%.


Jason Taylor

Healthcare Junkie - Making Digital Health Easier! - Transformational Leader - 30+ years "wiser" -Trusted Advisor to C-Suite

1d


We'll know pretty quickly, because the no-show and pre-auth rates will be an early symptom of whether appointments are being properly booked.

Personally, I think it's a tragic mistake to 'play' with AI by using it on patient experience. Healthcare is scary and confusing, and taking humans out will have a cost.


AiInfox

21h


Exciting developments in healthcare automation! Our advanced chatbot technology is engineered to seamlessly integrate into healthcare call centres, enhancing efficiency and patient care. Let's discuss how our solutions can address your specific needs and concerns outlined in this insightful article.


Patrick Conheady

Expert systems for the way decisions are really made

13h


The term "agent" is a bit hazy. Sometimes it is a chatbot, or simply a computer system with a chatbot interface. Other times it is a background process or "daemon," e.g., a workflow triggered by an incoming email. Pretty much any API can now be called an "agent." But people hear the term "agent" and think it's an AGI doppelganger that can do things on their behalf.

Many years ago, before the web and the client-server paradigm became dominant, it was thought that the internet would work like this: software on a person's TV would generate an object to represent a query, that object would travel from server to server collecting information, and then it would return to the person's TV to present its results. That was referred to as an "agent" also.

How it turned out is pretty much the opposite. The platform operator (say, Google for consumers or Epic for doctors) runs software on your hardware that wires it into an ecosystem of services that collate information (from your hardware and other sources) and sometimes show things on a screen to keep you happy. So the significance of the word "agent" is completely different. Rather than you having agents, your hardware is controlled by agents of others.


Marc W. Haddle

1d


AI in healthcare is going to be centered on the "next best action", possibly with "good/better/best" options. I imagine this will be for administrative/revenue cycle tasks long before clinical decisioning, if at all. Just because something "can" be automated, doesn't mean it should be...


Faisal Cheema

Physician @ Kaiser Permanente | Hematology, Oncology | Lead NCAL malignant hematology clinical trials

1d


Thanks for sharing. AI chatbots and agents can be transformative in many fields but need A LOT of training with a human in the loop prior to independent deployment. It is one thing to write code and execute a multistep process as a software engineer, or to summarize text and respond to customers, but healthcare is a totally different ball game. A wrong appointment for a wrong procedure will have dire consequences. I still believe AI agents may be helpful with the initial triage, but the final implementation needs a human in the loop.


Dustin Cotliar, MD MPH

Healthcare AI and HealthTech | Consultant | Future Founder | Empowering Patients & Optimizing Care

1d


It’s interesting that different types of agents will be able to interact with each other to accomplish various tasks. My personal health bot/agent could interact with the CVS agent when I’m prescribed a new medication. I was watching a lecture by Andrew Yang about the impact of agents, and he mentioned something I didn’t fully understand, but from what I could glean, agents are really good at reducing errors because they can continuously loop back to cross-check for mistakes.


Steve Omans

Executive Healthcare Leadership | Digital Transformation | SaaS | Strategy | AWS | Azure | Salesforce | Adobe | Relational Databases | AI ChatGPT 4 for Healthcare | Provider, Payer, Life Science, Telemedicine

1d


I really think AI can replace front-end call center questions. For instance, colonoscopies should be covered under Obamacare (CPT code 45378 with diagnosis codes Z12.11 and D12.6), but the only way to fully understand coverage is to align databases. In-network facility and provider information is easy to find, BUT payer info sits in separate databases. You would think the information could be connected.


Rahul Batra

Story Teller | Sales Leader | SaaS | Follow for content on sales and business along with my personal journey

1d


Unless it is 100% reliable, it is risky in the healthcare sector. Whether we will reach that point or not will always be a question mark!



More Relevant Posts

  • Spencer Dorn


    Vice Chair & Professor of Medicine, UNC


    Health tech companies promise AI will reduce physicians' and nurses' workloads. But history suggests that AI may make us even busier.

    In her award-winning 1983 book More Work for Mother, Ruth Schwartz Cowan explains how, paradoxically, mothers spent 𝒎𝒐𝒓𝒆 time on housework 𝒂𝒇𝒕𝒆𝒓 household appliances like washing machines and vacuum cleaners were invented. Why?

    1️⃣ Living standards improved. Instead of bland diets, mothers were supposed to bake bread and fix more sophisticated meals. Instead of only washing cuffs and collars, they started laundering entire shirts. Instead of cleaning rugs once every few months, they were expected to vacuum them weekly.

    2️⃣ As expectations increased, domestic servants and delivery services declined. As laundresses disappeared, mothers were stuck washing clothing. As milkmen went away, mothers were forced to purchase milk at the grocery store.

    Combined, this meant more work falling on the backs of mothers.

    In healthcare, we've seen this same pattern of increased expectations and reduced support with electronic health records.

    1️⃣ For example, before EHRs, patients expected to hear back from their physicians in a day or two. Now, they expect to hear back from us immediately.

    2️⃣ Similarly, before EHRs, physicians would ask clerks and secretaries to order imaging tests and make referrals. With EHRs, we now place these orders ourselves.

    While EHRs have improved healthcare services in many ways, like mothers in the 1960s, physicians and nurses now often work harder (not just because of EHRs).

    Will this pattern of technology creating new tasks and raising expectations repeat itself with healthcare AI? Probably.

    1️⃣ Many will expect physicians and nurses to see more patients (to pay for the software) and perform higher-level tasks (if AI offloads the easier work).

    2️⃣ Meanwhile, if practices use automation to reduce support staff, physicians and nurses may be stuck doing more themselves. (An analogy is how supermarket self-checkout means remaining workers must do more.)

    The point is not that AI/automation is bad. AI has great promise to make healthcare more effective, efficient, and affordable. However, we must continually ask what we gain and what we lose. Critically, we must also recognize that AI efforts will be limited and could backfire if we do not update our systems of care to accommodate these changes.

    #futureofwork #healthcareai #healthcareonlinkedin


  • Spencer Dorn


    Vice Chair & Professor of Medicine, UNC


    AI reliability is especially important in healthcare, where the stakes are high. Yet, as OpenAI cofounder Ilya Sutskever explains here, reliability is the biggest bottleneck to making language models truly useful.

    So, we are stuck searching for ways to use incompletely reliable LLMs. There are three common approaches. But each has holes.

    1️⃣ Since Gen AI is too unreliable for clinical decisions, let’s embed a clinician in the loop. Yet, like all humans, clinicians cannot maintain vigilance for rare outcomes (automation bias). (For example, attending physicians routinely co-sign their residents’ notes without thoroughly reading them.)

    2️⃣ Alternatively, if Gen AI is not reliable enough for clinical care, let’s apply it to administrative work. Yet, separating “admin” work from “clinical” work is often difficult and sometimes impossible. (As a recent example, a patient on high-dose gabapentin asked me for a letter to return to work as a truck driver.)

    3️⃣ If Gen AI is too unreliable for high-stakes work, let’s apply it to low-stakes work. Yet, if a low-stakes task can be automated, should we even do it at all? As Peter Drucker famously said, "There is nothing quite so useless as doing with great efficiency something that should not be done at all.”

    I'm all for responsibly using Gen AI in healthcare. The real question is when we should use it. The challenge is that some of the supposedly “right” situations are not as sturdy as they may seem.


  • Spencer Dorn


    Vice Chair & Professor of Medicine, UNC


    It’s easy to forget that physicians and nurses—like all people—experience illnesses directly or through loved ones. These experiences shape how we practice.

    In this touching JAMA essay, a nurse practitioner describes the physical and mental burden of living with severe eczema. During flares, her eczema is all-consuming. She continually reassures her patients that her rashes are not contagious, applies lotion in between patient encounters, avoids fluorescent lights that make her rashes hyper-visible, grieves the loss of her clear-skinned self, and worries she looks ugly despite her husband constantly reminding her she is beautiful.

    While we should not romanticize suffering, there are sometimes beneficial side effects. For one, she explains, “My rashes have been one of my greatest teachers of humility, empathy, and compassion” — the very essence of healing relationships. She also recognizes how difficult it is to treat herself with the same kindness she gives her patients.

    I see two key points. First, as many imagine (even fetishize) AI replacing doctors and nurses, we must remember that only fellow human clinicians can know (or imagine) what it feels like to be lost, vulnerable, overwhelmed, or scared. Second, illness or not, we clinicians can cultivate much-needed self-compassion by caring for ourselves as we care for our patients.

    #healthcareonlinkedin

    https://lnkd.in/gWAuph9a

    Best Versions of Ourselves jamanetwork.com


  • Spencer Dorn


    Vice Chair & Professor of Medicine, UNC


    How will physicians and nurses spend any time that AI saves them?

    In a survey of 640 UK physicians, nurses, and physical therapists, only one in four clinicians reported they’d spend newly freed-up time caring for patients. The others would use the time to complete administrative work, perform continuing education, work on quality improvement, or work less (figure).

    This is an important counterpoint to promises that AI will make clinicians more productive. Increased productivity does not come simply from freeing up time. It depends on how clinicians use this time. If they don’t spend this time seeing more patients, AI may not boost productivity to the extent imagined.

    And this should be OK! Many clinicians struggle to keep up with the fast pace and high demands of clinical practice. They want to slow down, not merely switch to another task.

    Of course, it may not be up to 𝒕𝒉𝒆𝒎. Regardless of what 𝒕𝒉𝒆𝒚 want, their bosses may direct them to see more patients. Even if their bosses let them choose, there’s a long history of technology that was promised to reduce work ultimately making us even busier.

    https://lnkd.in/ghxr3nZW

    #automation #healthcareai #healthcareonlinkedin


  • Spencer Dorn


    Vice Chair & Professor of Medicine, UNC


    Another retailer (Dollar General) is backing away from healthcare services. Last year, Dollar General—the nation’s largest retailer by number of stores (19,000)—partnered with DocGo to establish mobile clinics in its parking lots. Patients would pay $69 (or use Medicare or Medicaid) and, once inside, meet a medical assistant and connect via telehealth to a remote physician assistant or nurse practitioner. Dollar General is now ending this pilot.

    I liked the pilot concept when it was announced. For one, many rural Americans lack access to care in their communities. Driving hours away (as many of my patients do) is often a big deal. And though relationship-based primary care is preferable, it's not always an option. Also, I like the mobile unit model, which allows clinicians to do more (vital signs, assisted examination, blood tests, and x-rays) than is usually possible for patients doing telemedicine visits from home.

    Dollar General did not report why they stopped the pilot, but making models like this work is clearly quite hard. For one, the clinical logistics are complex. For example, what happens when a remote NP uncovers a higher-level need?

    The more significant challenge (as Walmart Health, VillageMD, One Medical, and others suggest) may be making money in transactional, fee-for-service primary care. Dollar General’s (and Walmart’s) experience suggests that while convenience matters, many may chafe at the idea of receiving healthcare from stores known for deep discounts, not quality.

    https://lnkd.in/gCVX-uxb

    Another Retailer Bites the Dust. Is There Still Hope for Retailers After Dollar General/DocGo End Pilot? - MedCity News https://medcitynews.com


  • Spencer Dorn


    Vice Chair & Professor of Medicine, UNC


    To paraphrase Paul Saffo, in a world of superabundant health content, physicians’ and nurses’ points of view are more important than ever. Yes, we are imperfect. And yes, our knowledge is limited. Yet, our practice and life experiences give us perspectives to help individuals contextualize information and make their best decisions.


  • Spencer Dorn


    Vice Chair & Professor of Medicine, UNC


    As an “Oregon Trail” physician who’s practiced in both the analog and digitized worlds, here are 10 over-arching thoughts on EHRs and healthcare AI:

    1️⃣ Digital technologies are polarizing. They affect how we live our lives and make us question our very nature. They conjure both utopian and dystopian images.

    2️⃣ Yet, we must resist polarized views and embrace nuance. As Melvin Kranzberg famously explained, “Technology is neither good nor bad, nor is it neutral.” We humans determine the net effects.

    3️⃣ In the early 2000s, experts and policymakers promised that EHRs would make healthcare far safer, more effective, and more affordable. Now, two decades later, we have not yet realized these promises.

    4️⃣ Still, EHRs have brought many (under-discussed) benefits and (widely-discussed) side effects.

    5️⃣ Healthcare AI will follow a similar path to EHRs. For one, we are almost certainly overestimating AI’s short-term effects and underestimating AI’s long-term impact (Amara’s Law).

    6️⃣ Also, like EHRs, healthcare AI will be a mixed bag. AI will make healthcare better in some ways and worse in others.

    7️⃣ Healthcare’s deepest challenges are not technical. As Laurie Anderson explained, “If you think technology will solve your problems, you don’t understand technology – and you don’t understand your problems.”

    8️⃣ Nonetheless, we (i.e., all healthcare workers) must acknowledge that we are imperfect and embrace AI and other digital tools when appropriate.

    9️⃣ This means cutting through the hype, keeping our eyes wide open, and, with each technology, continually asking, “What do we gain? What do we lose?”

    🔟 Ultimately, we must consider who we are, clarify what we do, and imagine how we can do it better—whether or not that involves new technology.

    I recently discussed these points in the presentation I linked to in the comments. What are your (preferably nuanced!) thoughts on this centrally important topic?


  • Spencer Dorn


    Vice Chair & Professor of Medicine, UNC


    Can we have art without artists? Or healing without healers?

    In this thoughtful essay, Jon Cockley explains that visiting an art exhibit is about more than the artwork; it is also about the person who created it. Would you ever visit a museum to view AI-generated art? Probably not. Why? Because “art without an artist is meaningless.”

    He concludes that because machines can instantly produce art, the actual art-making process is more important than ever. And because human-generated artwork takes more time, it will be more valued and revered than ever. (I’d add that while AI-generated art may not work for galleries and museums, it’s often fine for PowerPoint presentations and office walls.)

    There are clear parallels to healthcare. AI can instantaneously generate output to guide individuals’ behaviors and health decisions. Notwithstanding reliability issues, AI output alone will sometimes be good enough, even preferable (e.g., basic triage, addressing straightforward ailments, and even some aspects of chronic care).

    But often, healing relationships matter. A lot. Consider how therapeutic relationships enhance the placebo effect [for example, see DOI: 10.1136/bmj.39524.439618.25]. Or how adults with chronic back pain treated by more empathetic physicians achieve significantly better outcomes (less pain, better function, higher quality of life) [DOI: 10.1001/jamanetworkopen.2024.6026].

    Just like art, in a world of AI abundance, the healing process will become more important than ever. And healing relationships will be more valued and revered than ever.

    #healthcareonlinkedin

    https://lnkd.in/gKT5iiVt

    Rothko versus The Robots: How I learned how to stop worrying about AI killing our creativity creativeboom.com


  • Spencer Dorn


    Vice Chair & Professor of Medicine, UNC


    In a world with Google, Reddit, WebMD, and now ChatGPT, patients are increasingly informed about their health conditions and management options.

    Democratizing information is a net positive. It is necessary for shared medical decision-making, which aligns physicians’ expertise with patients’ preferences. Yet, we all know there are challenges.

    For one, online and AI-generated information can be inaccurate. And even accurate information presented (or interpreted) out of context may be misguiding. Earlier this week, I saw a hospitalized patient who was (wrongly) convinced that H. pylori was the reason he couldn't eat anything.

    Second, some patients may feel their doctors expect too much from them, and others may feel their doctors aren’t listening to their input. I care for many patients who simply want me to decide for them and others who have trouble letting go of clearly wrong (even harmful) theories.

    Third, physicians may be frustrated when patients bring a litany of ideas and even act as if they are the experts in the room. The average primary care visit lasts only 18 minutes [DOI: 10.1097/MLR.0000000000001450].

    These authors—a statistician and a physician—recommend boosting shared decision-making by giving patients brief “homework” to prepare for visits and then using a script to guide a productive discussion during time-limited visits. The goal is to blend two types of expertise: clinicians’ expertise in medicine and patients’ expertise in themselves. This approach seems sound, but implementing even straightforward practices like this is often challenging.

    In a world of near-infinite information, perspective and trust are the scarcest resources. This is where clinicians and healthcare organizations must double down.

    https://lnkd.in/g64NtUas

    When Patients Do Their Own Research theatlantic.com


