
Exploring the Promises and Pitfalls of AI in Diabetes Care: Innovations, Challenges, and the Road Ahead
“Artificial intelligence is the foundation of emerging medical technologies; it will power the future of diagnosing diabetes complications” (Ayers et al., 2024).

Imagine this: you’re sitting in the waiting room at your doctor’s office, flipping through a magazine or scrolling your phone, when a nurse calls your name. You follow her down the hallway. Instead of a human doctor, you’re greeted by a robot in a lab coat, scanning you with X-ray vision and infrared sensors.
Sounds like science fiction? Maybe for now, but not for long. As technology continues to evolve at lightning speed, the idea of AI in the doctor’s office is no longer a distant fantasy. It’s already beginning to reshape how we diagnose, treat, and care for diabetic patients. The future is closer than you think.
Understanding Diabetes Beyond the Stereotypes
So, what exactly is diabetes – and what comes to mind when we hear the word? Often, it’s someone who is overweight, eats poorly and doesn’t exercise. But that’s a misconception.
The World Health Organization defines diabetes as “a chronic, metabolic disease characterized by elevated levels of blood glucose (or blood sugar), which leads over time to serious damage to the heart, blood vessels, eyes, kidneys and nerves” (World Health Organization, 2024).
There are two primary types: Type 1 is an autoimmune condition where the body attacks insulin-producing cells, leading to a lifelong dependence on insulin injections or pumps. Type 2 occurs when the body resists the insulin it produces, often developing gradually and influenced by lifestyle.
So What Does AI Have to Do With This?
We’re living in a world increasingly powered by AI, whether that’s using ChatGPT to draft an email to your boss or even selecting a ‘Recommendation For You’ movie on Netflix. These systems operate through algorithms: sets of ‘rules’ used for tasks like data processing, calculations, and decision-making (Flew, 2021).
There is an urgent need to understand and acknowledge the prevalence of diabetes within our world; the possibility of AI use within diabetic medicine could change and save lives. With AI, we can build predictive models, automate risk assessments, and personalize disease management strategies (Wang et al., 2024).
However, behind the promise of precision and efficiency lies a darker side, one that raises urgent questions about data protection, bias, and accessibility.
Catching Diabetes Early: AI-Powered Diagnosis
One of AI’s most promising roles in diabetic medicine is early detection. Machine learning (ML), a subset of AI, has shown real promise here: ML models trained on patient data have demonstrated remarkable accuracy in predicting who will go on to develop the disease (Nwanua, 2024).
Win for team AI!
Traditionally, doctors rely on blood sugar tests and patient history. AI, however, can analyse massive datasets, covering genetics, lifestyle habits, and early symptoms, and predict who is most at risk. Identifying high-risk individuals and personalising their prevention strategies (and potential treatments) could delay the onset of diabetes and the health complications that follow (Wang et al., 2024).
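To make the idea concrete, here is a minimal sketch of how such a risk model might work: a toy logistic regression trained on entirely synthetic data. The feature names and numbers are hypothetical, not drawn from any of the studies cited here.

```python
# Illustrative sketch only: a toy diabetes risk model on synthetic data.
# All features, coefficients, and thresholds are made up for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500
# Hypothetical features: age, BMI, fasting glucose (mmol/L), family history (0/1)
X = np.column_stack([
    rng.normal(50, 12, n),    # age
    rng.normal(27, 5, n),     # BMI
    rng.normal(5.5, 1.0, n),  # fasting glucose
    rng.integers(0, 2, n),    # family history
])
# Synthetic label: risk grows with glucose, BMI, and family history
logits = 0.9 * (X[:, 2] - 5.5) + 0.1 * (X[:, 1] - 27) + 0.8 * X[:, 3] - 0.5
y = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(int)

model = LogisticRegression().fit(X, y)
# Rank patients by predicted risk so prevention can be targeted
risk = model.predict_proba(X)[:, 1]
high_risk = np.argsort(risk)[::-1][:20]  # top 20 highest-risk patients
```

The point is the workflow, not the model: rank everyone by predicted risk, then direct prevention resources to the top of the list.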
Case Study: Diabetic Retinopathy Detection
Google’s DeepMind developed an AI that can analyse eye scans and detect diabetic retinopathy, recognising the relevant traits in retinal screening images with up to 95% accuracy. In 2018, the FDA approved the first AI system for diabetic retinopathy screening, and it has since approved further AI technologies to assist with optimising insulin dosing and personalising patient therapy (Wang et al., 2024).
Diabetic retinopathy is one of the leading causes of blindness, and many patients don’t realize they have it until after vision loss begins. AI makes earlier, more accurate screening possible—and much faster than human doctors (Nwanua, 2024).
Earlier detection means better treatment outcomes, and because AI can screen patients far faster than human doctors, it can also relieve pressure on understaffed and underfunded healthcare systems around the world.
AI-Powered Glucose Monitoring & Insulin Adjustments
Anyone living with diabetes knows how unpredictable blood sugar can be. The wrong meal or missed dose, and you’re dealing with a spike or crash. But lucky for you, AI is changing the game.
Through technologies like our phones (specifically features such as the camera, location tracking, and finance apps), collectively gathered data can help countries address significant economic and health challenges with more precision and greater adaptability (World Economic Forum, 2011).
AI systems are helping with glucose monitoring in a multitude of ways. AI-driven continuous glucose monitors (CGMs) can predict blood sugar fluctuations up to 60 minutes in advance, potentially saving lives.
Devices like the Dexcom G7 use AI to offer real-time insights and personalised alerts by connecting to the devices around them.
Because of monitoring systems like the Dexcom G7, dangerous blood sugar highs and lows are less likely. There is also less reliance on traditional methods such as finger-prick tests (no one wants to stab themselves with a needle every hour, do they?), and managing diabetes becomes more automated and less stressful!
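The kind of short-horizon forecasting a CGM performs can be illustrated with a deliberately simple sketch: a linear autoregressive model over synthetic 5-minute readings. Real devices use far more sophisticated, proprietary models; the data, thresholds, and horizon below are all assumptions for illustration.

```python
# Illustrative sketch only: predicting glucose 60 minutes ahead from the
# last few CGM readings, using synthetic data and a toy linear model.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
# Synthetic CGM trace: one reading every 5 minutes (mg/dL)
t = np.arange(2000)
glucose = 120 + 30 * np.sin(t / 40) + rng.normal(0, 3, t.size)

LAGS, HORIZON = 6, 12  # last 30 min of readings -> 60 min (12 steps) ahead
X = np.array([glucose[i:i + LAGS] for i in range(t.size - LAGS - HORIZON + 1)])
y = glucose[LAGS + HORIZON - 1:]  # target: 12 steps after each window's end

model = LinearRegression().fit(X, y)
pred = model.predict(glucose[-LAGS:].reshape(1, -1))[0]  # forecast in mg/dL
alert = pred < 70 or pred > 180  # flag predicted hypo- or hyperglycemia
```

The forecast-then-alert pattern is the point: acting on a predicted spike or crash an hour early is what makes these systems potentially life-saving.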
But What About the Downsides?
Wow, aren’t there so many reasons to be grateful for the age of AI, algorithms, and data that we live in?
Well…
I hate to break it to you, but for all its potential, AI in healthcare just isn’t perfect. There are serious challenges, from privacy risks to algorithmic bias, that affect how useful these tools really are.
Your personal health data is now stored across countless databases – not just in hospitals, but on apps, devices, and platforms. This makes data an incredibly valuable resource to companies. In fact, the World Economic Forum dubbed it “the new oil”— the real asset of the 21st century (World Economic Forum, 2011).
However, this blog is not revolutionary (surprisingly…) in its doubts about the healthcare world’s dependence on algorithms; these privacy issues were already raised during the COVID-19 pandemic. Public health authorities in each country needed information about the movements and interactions of those infected with the virus, which led to a multitude of apps built to harvest it, with the help of companies such as Google and Apple (Flew, 2021). This sparked questions about who should have access to our personal health data, and at what point we lose control of how it is used.
The question remains: who owns your data?
Case Study: Retinopathy Screening & Data Misuse
A 2021 study published in BMC Medical Ethics found that many AI-powered diabetic screening programs store patient data without clear transparency; some of this data is used for commercial purposes without patients even knowing (Murdoch, 2021).
That same year, 686 healthcare data breaches saw 45 million records exposed, stolen, or both (Alder, 2021).
With AI evolving at a rapid speed, everyone wants a piece of the action. The result? A surge in commercialisation, as companies race to stake their claim in the future of healthcare. This commodification of personal data undermines user confidence and trust.
And, because of this commercialisation, many of these AI tools end up owned by private entities. Implementing their technology within diabetic medicine gives these companies a greater role in obtaining and using patient data (Murdoch, 2021).
So, understandably, the concerns about the misuse of personal data by ‘big data’ continue to grow (World Economic Forum, 2011).
But what can we do?
Not much, unfortunately.
The private companies that hold our AI-gathered data also face competing commercial pressures from other companies, which directly affect patients, or rather, their ‘data’. These companies should be required to ensure that our data is protected and not used beyond its intended purpose (Murdoch, 2021).
There are, therefore, many risks associated with the use of AI, not only within diabetic medicine but across the medical field as a whole. Patients don’t know who has their data, and with Big Data companies (like Google, Amazon, and Apple) increasingly involved in healthcare technology, who can we trust?
And as Murdoch points out, they are often driven by business goals—not public health.
AI Bias: When the Algorithm Gets It Wrong
What is one of AI’s biggest weaknesses? It isn’t always fair. The data used to train AI models can contain hidden biases, which means AI may work better for some people than others.
Because of the way AI is ‘trained’, it has several characteristics that set it apart from ‘traditional’ medical technologies: it can be prone to bias, and these errors cannot always be easily supervised or detected by the human medical professionals who monitor the technology (Murdoch, 2021).
One example within diabetic medicine is AI failing to diagnose diabetes in minority groups.
A 2024 study in npj Digital Medicine found that AI models underestimated diabetes risk in Black and Hispanic populations because they were trained on predominantly white patient data (Wang et al., 2024). That means people of colour could be misdiagnosed or receive inadequate care.
What makes this even more distressing is that diabetes is more prevalent among Black and Mexican Americans than among White Americans. If AI tools aren’t trained on diverse datasets, they risk reinforcing existing health inequalities (Wang et al., 2024).
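The underlying problem is easy to demonstrate: a model that errs more often on an under-represented group can look accurate overall, and only a per-group audit reveals the gap. The sketch below simulates this with synthetic data and made-up group labels; the audit pattern, not the numbers, is the point.

```python
# Illustrative sketch only: auditing a model's accuracy per demographic
# group. Data, labels, and error rates are synthetic, chosen to mimic a
# model trained mostly on one group.
import numpy as np

rng = np.random.default_rng(2)
n = 1000
group = rng.choice(["A", "B"], size=n, p=[0.85, 0.15])  # imbalanced groups
y_true = rng.integers(0, 2, n)
# Simulate a model that errs more often on the under-represented group B
error_rate = np.where(group == "A", 0.10, 0.30)
y_pred = np.where(rng.random(n) < error_rate, 1 - y_true, y_true)

for g in ["A", "B"]:
    mask = group == g
    acc = (y_true[mask] == y_pred[mask]).mean()
    print(f"group {g}: accuracy {acc:.2f}")  # a large gap signals bias
```

Reporting accuracy per subgroup, rather than one overall number, is exactly the kind of check that catches the disparities described above before a tool reaches patients.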
The Accessibility Problem: Who Gets Left Behind?
AI-driven diabetes tools are neither cheap nor widely available in LMICs (low- and middle-income countries). Of the nearly half a billion people worldwide living with diabetes, 80% are in LMICs, and Type 2 diabetes (which affects 9 out of 10 diabetic patients) is now surging in these regions (Nwanua, 2024).
Within LMICs, the limited budgets of governments, and therefore of their health systems, restrict their ability to purchase these AI tools, even when their potential value is recognised (Nwanua, 2024).
AI-powered apps can cost hundreds of dollars a year. Devices like Dexcom cost thousands.
In LMICs, AI-powered diabetes care is nearly non-existent for the vast majority of those affected, with only the wealthy having access to these life-changing tools.
The result? A widening gap between those who can afford AI-driven care and those who can’t.
Final Verdict: AI in Diabetic Medicine—Blessing or Curse?
AI is already transforming how we diagnose and manage diabetes. It’s helping patients predict their glucose fluctuations, automate insulin delivery via pumps and get faster diagnosis. But at the same time, privacy issues, algorithmic bias, and high costs could make it a tool for the privileged, rather than a universal solution.
With private companies controlling much of the innovation, we must keep patient care—not profit—at the centre. That means including clinicians, researchers, entrepreneurs, and patients themselves in shaping how AI is developed and used, not just shareholders (Wang et al., 2024).
Looking Ahead: What Needs to Change
If we want AI to truly revolutionize diabetes care for everyone, we need:
- Stronger privacy protections to keep patient data safe.
- More diverse training data during AI development to reduce bias.
- Lower costs and greater accessibility for emerging AI tools, especially in LMICs.
As AI continues to evolve and absorb new data at an incredible rate, its potential to revolutionise diabetic healthcare is undeniable. But while we celebrate the breakthroughs and efficiencies, we cannot turn a blind eye to the risks. Data privacy, unequal access, and the growing threat of breaches and stolen health records are no longer distant concerns—they are here. The future of medicine may be digital, but it’s up to us to ensure it remains ethical, secure, and human at its core.

References
Alder, S. (2021, December 30). Largest Healthcare Data Breaches of 2021. The HIPAA Journal. https://www.hipaajournal.com/largest-healthcare-data-breaches-of-2021
Ayers, A. T., Ho, C. N., Kerr, D., Cichosz, S. L., Mathioudakis, N., Wang, M., Najafi, B., Moon, S.-J., Pandey, A., & Klonoff, D. C. (2024). Artificial Intelligence to Diagnose Complications of Diabetes. Journal of Diabetes Science and Technology, 19(1), 246–264. https://doi.org/10.1177/19322968241287773
Dexcom G7 Receives CE Mark – Next-Generation Continuous Glucose Monitoring System to Revolutionize Diabetes Management. (2022, March 9). Business Wire. https://www.businesswire.com/news/home/20220309005998/en/Dexcom-G7-Receives-CE-Mark-Next-Generation-Continuous-Glucose-Monitoring-System-to-Revolutionize-Diabetes-Management
Flew, T. (2021). Regulating Platforms. Polity Press.
Murdoch, B. (2021). Privacy and artificial intelligence: Challenges for protecting health information in a new era. BMC Medical Ethics, 22(1). https://doi.org/10.1186/s12910-021-00687-3
Nwanua, M. (2024). Advancement in Diabetes Research: Accessibility of AI-powered Tools for Early Diabetes Detection in LMICs. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.4801815
Wang, S., Nickel, G., Venkatesh, K. P., Raza, M. M., & Kvedar, J. C. (2024). AI-based diabetes care: Risk prediction models and implementation concerns. Npj Digital Medicine, 7(1). https://doi.org/10.1038/s41746-024-01034-7
World Economic Forum. (2011). Personal Data: The Emergence of a New Asset Class. https://www3.weforum.org/docs/WEF_ITTC_PersonalDataNewAsset_Report_2011.pdf
World Health Organization. (2024). Diabetes. https://www.who.int/health-topics/diabetes#tab=tab_1