Copilot Cowork Brings Long-Running, Multi-Step AI Task Execution to Microsoft 365 via Frontier Programme
If you use Microsoft 365 for daily work in Outlook, Teams, or Excel, Copilot Cowork is worth close attention because it fundamentally changes what the assistant can do. Rather than answering a single prompt and stopping, Cowork accepts a goal, breaks it into a sequence of steps, works across your apps and files, and delivers a finished output while you get on with other tasks.
What makes this architecturally significant is the multi-model setup running underneath: GPT handles the initial draft, and then Anthropic's Claude reviews it for accuracy and verifies citations before the result reaches the user. That "Critique" capability is complemented by a "Council" mode, which lets users compare responses from different models side by side to choose the best answer. Copilot Cowork launched on 30 March 2026 and is currently available to a limited group through Microsoft's Frontier early-access programme, with broader rollout expected across the coming weeks.
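The "Critique" and "Council" patterns described above can be sketched in a few lines of orchestration code. Everything below is a hypothetical illustration, not Microsoft's actual API: the function names, prompts, and stub models are placeholders, since Cowork's internals have not been published.

```python
from typing import Callable, Dict

def draft_then_critique(task: str,
                        drafter: Callable[[str], str],
                        critic: Callable[[str], str]) -> str:
    """Route the draft to one model, then pass the result to a second
    model that checks accuracy and citations (the 'Critique' pattern)."""
    draft = drafter(task)
    return critic(f"Check accuracy and citations:\n{draft}")

def council(task: str,
            models: Dict[str, Callable[[str], str]]) -> Dict[str, str]:
    """Run the same task on several models side by side (the 'Council'
    pattern), leaving the choice of best answer to the user."""
    return {name: model(task) for name, model in models.items()}

# Stub models so the sketch runs without any API keys.
gpt_stub = lambda prompt: f"[draft: {prompt}]"
claude_stub = lambda prompt: f"[reviewed: {prompt}]"

result = draft_then_critique("Summarise Q1 sales", gpt_stub, claude_stub)
print(result)  # the critic model's reviewed output
```

The design point is that each role is just a callable taking and returning text, so the drafter and critic can come from different vendors without the pipeline caring which.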
Gemini 2.5 Flash-Lite Preview API Endpoint Retired on 31 March 2026
If you have applications or automated pipelines calling the gemini-2.5-flash-lite-preview-09-2025 endpoint, today is the day those calls stop working. Google retired this preview model on 31 March 2026 as part of its ongoing deprecation cycle for older Gemini preview endpoints. For the majority of production developers who already migrated to the Gemini 3.x series, this will have no impact.
For anyone still relying on the older preview endpoint, the immediate action is to migrate your integrations to a currently supported version such as gemini-3.1-flash. The wider takeaway is that Google is running a faster deprecation cadence than many developers may have expected: preview endpoints introduced in late 2025 are now being retired within months of launch, which means any integration built on a "-preview" model string needs active monitoring against the published deprecation schedule.
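One defensive pattern for this kind of churn is to keep a small deprecation table in the client and resolve every model string through it at call time, rather than hard-coding "-preview" identifiers at call sites. The sketch below is a hypothetical illustration using only the endpoints and the date named in this article; it is not an official Google mechanism.

```python
from datetime import date

# Hypothetical client-side deprecation table. The sunset date and
# replacement mapping reflect only what is stated in the article above.
DEPRECATIONS = {
    "gemini-2.5-flash-lite-preview-09-2025": {
        "sunset": date(2026, 3, 31),
        "replacement": "gemini-3.1-flash",
    },
}

def resolve_model(model: str, today: date) -> str:
    """Return a supported model string, swapping in the replacement
    once a deprecated endpoint's sunset date has been reached."""
    entry = DEPRECATIONS.get(model)
    if entry and today >= entry["sunset"]:
        return entry["replacement"]
    return model

print(resolve_model("gemini-2.5-flash-lite-preview-09-2025",
                    date(2026, 3, 31)))  # prints gemini-3.1-flash
```

Funnelling every request through a resolver like this turns a breaking retirement into a one-line table update, and the same table can drive a CI check that warns when any configured model is approaching its sunset date.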
Industry Themes
The multi-model era has arrived in enterprise software. Microsoft shipping Copilot Cowork with GPT drafting and Claude reviewing is the clearest signal yet that enterprise AI platforms are abandoning single-model loyalty in favour of composing the best model for each sub-task. For anyone evaluating AI infrastructure, the lesson is that model-agnostic orchestration is now a first-class product feature, not an experimental research architecture.
API deprecation cycles are accelerating. Google's retirement of the Gemini 2.5 Flash-Lite Preview endpoint today fits a tightening pattern in which preview model versions are discontinued within months of launch across every major provider. Developers building on "-preview" API endpoints from any vendor should treat deprecation schedule monitoring as a standard part of integration maintenance.
A quiet 24 hours at month-end reveals a wider pattern: after one of the most announcement-dense weeks in recent AI history (spanning GPT-5.4, Gemini 3.1, Mistral Voxtral TTS, and the Claude Mythos leak), the industry appears to be pausing for breath. The current story is less about what is launching and more about what developers and enterprises will actually adopt from the recent wave of releases.