When you need to look something up, you type. You scroll. You read. For most people, this is second nature. For the 183,000 Scots living with sight loss — and millions more worldwide who struggle with text-based interfaces — it has always been a barrier.
Last week, Google took a significant step toward dismantling it.
What Gemini Flash Live actually does
On March 26, Google DeepMind launched Gemini 3.1 Flash Live, its most advanced voice AI model to date, alongside a global rollout of Search Live to more than 200 countries. The combination means something genuinely new: you can now have a spoken conversation with Google's AI while it simultaneously searches the web for answers.
Ask it about a recipe while your hands are covered in flour. Point your phone camera at a plant and ask what it is. Query it about train times while walking down the street. The AI responds in under a second, maintains the thread of your conversation, and pulls in live search results — without you ever needing to touch a keyboard.
"Gemini 3.1 Flash Live delivers the speed and natural rhythm needed for the next generation of voice-first AI," Google said in its announcement, noting the model is inherently multilingual and can filter out background noise from traffic or television.
How it compares
The voice AI race is intensifying. OpenAI's ChatGPT voice mode already offers fluent spoken conversations, though it lacks integrated web search during live dialogue. Amazon recently launched Alexa+, its AI-upgraded assistant, in the UK. Apple is reportedly planning a major Siri overhaul for iOS 27, expected later this year.
What sets Gemini Flash Live apart is the marriage of voice and real-time search. Rivals offer one or the other; Google is offering both simultaneously, woven into the search engine that already dominates how most people find information online.
The accessibility story
This is where the technology moves from impressive to potentially life-changing.
For people with visual impairments, reading difficulties, or motor disabilities that make typing painful, conventional web search has always demanded workarounds. Screen readers help, but they still require navigating text-heavy pages. Voice assistants like Alexa and Siri can answer simple questions but fall short on the complex, multi-step queries people turn to search engines for every day.
A voice-first AI that can search the web conversationally changes that equation. Need to compare energy tariffs? Research a medical symptom? Find out what's on at the local cinema? You simply ask — and the AI does the reading for you.
RNIB Scotland has long campaigned for more accessible digital services. In the charity's 2026 Scottish Parliament election manifesto, director James Adams urged a focus on "accessible information and transport services." While his comments addressed broader policy goals rather than any specific product, the principle is directly relevant: if search becomes something you speak rather than type, a significant digital barrier falls away.
The scale of it
This is not a niche experiment. Google reported in February that Gemini has 750 million monthly active users, making it one of the most widely used AI platforms in the world. Search Live is now available in over 200 countries and territories. When a feature reaches that kind of scale, it stops being a tech story and becomes a social one.
All audio generated by the model is watermarked with Google's SynthID technology to help identify AI-generated content — a sensible precaution as voice AI becomes harder to distinguish from human speech.
What comes next
The real test will be whether voice-plus-search actually changes habits. Technology only matters if people use it. But for those who have always found the web's text-first design exclusionary, a spoken conversation with a search engine is not just convenient.
It might be the first time the internet truly talks back.