As the Managing Director of Summit Security Group, I had the incredible opportunity to moderate a thought-provoking panel on the intersection of generative artificial intelligence and healthcare technology. Hosted by the Technology Association of Oregon, our event featured insightful discussions with prominent experts: Brooke Cowan, Tisson Mathew, John Harkness, and Lloyd Fobi. In this blog post, we delve into a few key takeaways from the panel, shedding light on crucial topics that emerged during our discourse.
Health Data Privacy in the Age of Generative AI: Striking the Balance
In the realm of healthcare technology, data privacy is critical. Even minor data breaches can lead to expensive penalties and, in the worst instances, undermine patient care in ways that puts lives at risk. The fusion of generative AI and healthcare brings about unprecedented possibilities, but it also raises intricate concerns about safeguarding sensitive patient information. As we harness the power of AI to generate medical data, ensuring compliance with regulations like the HIPAA Privacy Rule becomes paramount. A strong understanding of how the Protected Health Information (PHI) is stored, used, handled and flows is necessary to maintain data privacy and stewardship. This is a concern especially when considering the use of open generative AI models. Closed, or vertical generative AI models have different concerns.
Capital Expenditure Implications for Generative AI Startups
While the potential for innovation in generative AI may seem limitless, startups in this domain must be mindful of Capital Expenditure (CapEx) implications. Our panelists explored how these startups face unique challenges in terms of hardware costs, computational resources, utility costs for computing power, and research and development investments.
For Generative AI to operate efficiently requires special Graphics Processing Units (GPUs). GPUs differ from Central Processing Units (CPUs) basically in that they are really good at maths. That mathematical computing power traditionally was used to improve computer graphics, and then later to mine cryptocurrency. Now special GPUs, like the Nvidia H100s, are being used to power Generative AI. A single Nividia H100 costs about $30,000 and has at least 80GB of RAM. A vertical AI model will require multiple of these types of cards used in conjunction in a single server with an upfront cost well into six figures (each). These computing units are also extremely resource-intensive, requiring a large amount of electrical power and cooling.
There are additional supply and demand concerns regarding generative AI chips. There are currently no chip foundries in the United States, and the only foundaries currently producing the necessary components for these chips are in Taiwan.
“Investors who understand the long-term vision of generative AI, including the possible limitations of chips and capital, are crucial. Striking a balance between innovation and financial sustainability will be instrumental in nurturing the growth of this technology.”
Enhancing Healthcare Quality Metrics with Generative AI
Generative AI is very good at certain things, and one of those is processing and analyzing large amounts of data, such as healthcare quality metrics. It has the transformative potential to enhance healthcare quality metrics across the board. From accelerating drug discovery through molecular modeling to enabling personalized treatment plans based on patient data, the applications are profound. The integration of AI-powered solutions can lead to more efficient diagnoses, optimized treatment plans, and ultimately, improved patient outcomes. For example, Swedish researchers improved breast cancer diagnosis by 20% with the help of AI.
Unveiling the Threat of Prompt Injection Attacks on Generative AI Applications
No conversation about AI is complete without addressing the associated risks. During the panel discussion, we primarily discussed non-adversarial threats to generative AI. What we didn’t get a chance to cover are adversarial threats specific to AI. Prompt injection attacks have emerged as a significant concern in generative AI applications. These attacks involve manipulating the input data to generate biased or malicious outputs. The need for rigorous application security testing and continuous monitoring is critical to the AI tech sector as a whole and not just healthcare technology. As we increasingly utilize AI-generated insights, preemptive measures against prompt injection attacks will be pivotal in maintaining the integrity of the responses.
The panel discussion at the Technology Association of Oregon left us with a heightened sense of anticipation for the future of healthcare technology. The convergence of generative AI and healthcare holds immense promise, but it also demands a holistic approach to security, ethics, and responsible innovation.
At Summit, we are committed to staying at the forefront of technological advancements and their implications for cybersecurity. As the landscape evolves, we will continue to collaborate, educate, and fortify our clients against emerging threats. Keep an eye on our blog for more insights, analyses, and explorations into the dynamic world of cybersecurity and technology.