Outlook Business Desk
Google has unveiled two new healthcare-focused artificial intelligence models, MedGemma 1.5 and MedASR, reinforcing its push towards open-access medical AI for researchers and developers.
Unlike competitors offering paid enterprise healthcare tools, Google has released both models publicly, allowing the wider research and developer community to explore, adapt and build upon them.
MedGemma 1.5 is Google’s latest medical vision-language model, designed to analyse medical images alongside text and assist with research-driven tasks involving visual healthcare data.
According to Google Research, MedGemma 1.5 delivers stronger multimodal reasoning and improved handling of complex medical imagery, while supporting fine-tuning for specialised datasets and study needs.
The model works with radiology scans and other clinical images, supporting research tasks such as visual question answering, medical report creation and structured data extraction.
Google emphasised that MedGemma 1.5 is not intended for diagnosis or treatment decisions and should be used only to support research and development, not direct patient care.
Alongside MedGemma 1.5, Google launched MedASR, a healthcare-focused speech recognition model designed to transcribe clinical conversations accurately while recognising medical terminology and a range of accents.
Google said MedGemma and MedASR are available through Hugging Face and Vertex AI, while documentation and tutorials can be accessed via the MedGemma GitHub repository.
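For researchers who want to experiment, the sketch below shows one plausible way to load a MedGemma checkpoint from Hugging Face using the open-source transformers library. It is illustrative only: the model id "google/medgemma-4b-it" and the sample image path are assumptions for the example, so the official MedGemma listing and GitHub documentation should be checked for the released checkpoint names and recommended usage.

    # Minimal sketch: querying a MedGemma checkpoint via the transformers
    # image-text-to-text pipeline. Model id and image path are assumptions.
    from transformers import pipeline
    from PIL import Image

    pipe = pipeline("image-text-to-text", model="google/medgemma-4b-it")

    image = Image.open("chest_xray.png")  # local clinical image, research use only
    messages = [
        {
            "role": "user",
            "content": [
                {"type": "image", "image": image},
                {"type": "text", "text": "Describe the key findings in this image."},
            ],
        },
    ]

    # Returns generated text; output is for research and development, not diagnosis.
    print(pipe(text=messages, max_new_tokens=128))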