AI is rapidly changing the future of healthcare. Its ability to analyze complex medical and research data is improving diagnostics, drug development, and our overall understanding of health. But along with its promise come serious ethical questions, particularly around equity, trust, and inclusion. These topics were at the center of the “Code, Context, and Care” symposium, hosted by the Cobb Institute on Sunday, July 20, during the National Medical Association conference. The event brought together experts in medicine, informatics, public health, and education to explore how AI can responsibly support healthcare delivery.
AI presents transformative opportunities in healthcare—from diagnostics to population health—but without thoughtful design, it can also amplify existing inequities. As noted by panelists like Dr. Alison Whelan and Dr. Hassan Tetteh, biased datasets, opaque algorithms, or poorly validated tools can undermine clinical trust, misguide interventions, and further marginalize vulnerable populations.
Gilles Gnacadja, PhD, a research strategist at Amgen, provided a critical industry perspective on the ethical integration of AI in clinical research and development, emphasizing the standards AI must meet to be truly impactful.
From a biopharmaceutical standpoint, Dr. Gnacadja underscored the responsibility of industry leaders to implement AI with clinical validity and ethical guardrails, especially when these tools influence real-world treatment decisions. His remarks were a strong reminder that advanced AI must serve all patients—not just those best represented in training datasets.
For healthcare professionals, the takeaway is clear: our engagement and oversight are essential to ensuring AI enhances care without compromising equity or trust.
This year’s symposium featured a dynamic roster of panelists and speakers representing diverse expertise and lived experience.
When AI is built using incomplete or biased data, it can lead to serious consequences—especially for Black patients. For example, if an algorithm assumes healthcare costs reflect health needs, it may overlook those who face barriers to accessing care. To make AI work for everyone, we need data that truly represents our communities.
AI can make it easier to match people to clinical trials, which are often the gateway to cutting-edge treatments. But Black patients are still underrepresented in research. That means we risk missing out on care designed with us in mind. Equity in trial access is essential to creating health solutions that actually serve our communities.
Doctors are learning to use AI as part of their medical training. But it’s not just about learning the technology—it’s about recognizing when AI tools might be biased or harmful. We need to make sure all future doctors, especially those from underrepresented backgrounds, are prepared to use AI in ways that respect and protect every patient.
AI can help doctors make more informed decisions, but it should never replace human judgment. Patients deserve care that considers their full story—not just what a computer model predicts. That’s why clinical oversight and patient-centered thinking must always come first.
AI tools used in things like X-rays, orthopedic care, or pregnancy monitoring need to work for people of all skin tones, body types, and backgrounds. If these tools aren’t tested on diverse groups, they may miss key health issues. Patients have the right to know that the tools guiding their care are accurate—and fair.
BlackDoctor.org will continue to share AI trends and takeaways to keep you up to date!