
July 2025: All AI updates from the past month

Google’s new Opal tool allows users to create mini AI apps with no coding required

Google has launched a new experimental AI tool designed for users who want to build apps entirely using AI prompts, with no coding needed at all.

Opal allows users to create mini AI apps by chaining together AI prompts, models, and tools, using natural language and visual editing.

“Opal is a great tool to accelerate prototyping AI ideas and workflows, demonstrate a proof of concept with a functional app, build custom AI apps to boost your productivity at work, and more,” Google wrote in a blog post.

The tool includes a visual editor that helps creators see the workflows in their apps and connect different prompts together to build multi-step apps. Users can describe the logic they want in the app and have Opal build the workflow for them, then edit the generated workflow either in the visual editor or through additional prompts.

Gemini 2.5 Flash-Lite is now generally available

Gemini 2.5 Flash-Lite is Google’s fastest and cheapest model, priced at $0.10 per million input tokens and $0.40 per million output tokens (compared to $1.25 per million input tokens and $10 per million output tokens for Gemini 2.5 Pro).

“We built 2.5 Flash-Lite to push the frontier of intelligence per dollar, with native reasoning capabilities that can be optionally toggled on for more demanding use cases. Building on the momentum of 2.5 Pro and 2.5 Flash, this model rounds out our set of 2.5 models that are ready for scaled production use,” Google wrote in a blog post.
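To illustrate the optional reasoning toggle, here is a minimal sketch using Google’s google-genai Python SDK. The model ID “gemini-2.5-flash-lite” and the zero thinking budget are assumptions based on Google’s documentation for the 2.5 family, not details from the announcement.

```python
# Minimal sketch: calling Gemini 2.5 Flash-Lite with reasoning ("thinking") toggled off.
# Assumes the google-genai SDK and a GEMINI_API_KEY environment variable; the model ID
# "gemini-2.5-flash-lite" is an assumption and may differ from the GA identifier.
from google import genai
from google.genai import types

client = genai.Client()  # reads GEMINI_API_KEY from the environment

response = client.models.generate_content(
    model="gemini-2.5-flash-lite",
    contents="Summarize the trade-offs between latency and reasoning depth in one paragraph.",
    config=types.GenerateContentConfig(
        # A thinking budget of 0 skips reasoning for latency-sensitive calls;
        # raise it (or omit the config) for more demanding tasks.
        thinking_config=types.ThinkingConfig(thinking_budget=0),
    ),
)
print(response.text)
```

Setting the thinking budget per request is what lets a single model serve both high-throughput calls and more demanding reasoning tasks.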

GitLab Duo Agent Platform enters beta

GitLab Duo Agent Platform is an orchestration platform for AI agents that work across DevSecOps in parallel. For instance, a user could delegate a refactoring task to a Software Developer Agent, have a Security Analyst Agent scan for vulnerabilities, and have a Deep Research Agent analyze progress across the repository.

Some of the other agents that GitLab is building as part of this include a Chat Agent, Product Planning Agent, Software Test Engineer Agent, Code Reviewer Agent, Platform Engineer Agent, and Deployment Engineer Agent.

The first beta is available for GitLab.com and self-managed GitLab Premium and Ultimate customers. It includes a VS Code extension and plugins for JetBrains IDEs, and next month the company plans to bring the platform into GitLab itself and expand IDE support.

Google adds updated workspace templates in Firebase Studio that leverage new Agent mode

Google is adding several new features to its cloud-based AI workspace Firebase Studio, following its update a few weeks ago when it added new Agent modes, support for MCP, and integration with the Gemini CLI.

Now it is announcing updated workspace templates for Flutter, Angular, React, Next.js, and general Web that use Agent mode by default. Users will still be able to toggle between “Ask” and Agent modes, depending on what the task at hand calls for.

The templates now have an airules.md file to provide Gemini with instructions for code generation, like specific coding standards, handling methods, dependencies, and development best practices.

Google says it will be updating templates for frameworks like Go, Node.js, and .NET over the next few weeks as well.
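As a purely illustrative sketch of the kind of instructions the airules.md file described above might carry (the rules themselves are assumptions, not taken from Google’s templates):

```
# airules.md (illustrative example; contents are assumptions)
- Follow the framework's official style guide for all generated code.
- Keep components small and add a brief doc comment to each new function.
- Do not introduce new dependencies without asking first.
- Include error handling for all network and file operations.
- Generate unit tests alongside any new business logic.
```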

ChatGPT now has an agent mode

OpenAI is bringing the power of agentic AI to ChatGPT so that it can handle complex requests from users autonomously.

It leverages two of OpenAI’s existing capabilities: Operator, which can interact with websites, and deep research, which can synthesize information. According to OpenAI, these capabilities were best suited for different situations, with Operator struggling with complex analysis and deep research being unable to interact with websites to refine results or access content that required authentication.

“By integrating these complementary strengths in ChatGPT and introducing additional tools, we’ve unlocked entirely new capabilities within one model. It can now actively engage websites—clicking, filtering, and gathering more precise, efficient results. You can also naturally transition from a simple conversation to requesting actions directly within the same chat,” the company wrote in a blog post.

YugabyteDB adds new capabilities for AI developers

The company added new vector search capabilities, an MCP Server, and built-in Connection Pooling to support tens of thousands of connections per node.

Additionally, it announced support for LangChain, Ollama, LlamaIndex, AWS Bedrock, and Google Vertex AI. Finally, YugabyteDB now has multi-modal API support with the addition of support for the MongoDB API.

“Today’s launch is another key step in our quest to deliver the database of choice for developers building mission-critical AI-powered applications,” said Karthik Ranganathan, co-founder and CEO, Yugabyte. “As we continuously enhance YugabyteDB’s compatibility with PostgreSQL, the expanded multi-modal support, a new YugabyteDB MCP server, and wider integration with the AI ecosystem provide AI app developers with the tools and flexibility they need for future success.”
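Because YugabyteDB exposes a PostgreSQL-compatible YSQL layer, its vector search can be exercised with standard PostgreSQL tooling. The sketch below assumes a local cluster with the pgvector extension available and uses psycopg2; the table name, vector size, and connection details are illustrative, not from the announcement.

```python
# Sketch: vector similarity search against YugabyteDB's PostgreSQL-compatible YSQL API.
# Assumes a local YugabyteDB node (YSQL default port 5433) with the pgvector extension
# available; table name, vector size, and data are illustrative.
import psycopg2

conn = psycopg2.connect(host="127.0.0.1", port=5433, user="yugabyte", dbname="yugabyte")
cur = conn.cursor()

cur.execute("CREATE EXTENSION IF NOT EXISTS vector;")
cur.execute("""
    CREATE TABLE IF NOT EXISTS docs (
        id SERIAL PRIMARY KEY,
        body TEXT,
        embedding VECTOR(3)
    );
""")
cur.execute(
    "INSERT INTO docs (body, embedding) VALUES (%s, %s);",
    ("hello world", "[0.1, 0.2, 0.3]"),
)
conn.commit()

# Nearest-neighbor search by cosine distance (the <=> operator comes from pgvector).
cur.execute(
    "SELECT body FROM docs ORDER BY embedding <=> %s LIMIT 5;",
    ("[0.1, 0.2, 0.25]",),
)
print(cur.fetchall())

cur.close()
conn.close()
```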

Composio raises $29 million in Series A funding

The company is trying to build a shared learning layer for AI agents so that they can learn from experience. “You can spend hundreds of hours building LLM tools, tweaking prompts, and refining instructions, but you hit a wall,” said Soham Ganatra, CEO of Composio. “These models don’t get better at their jobs the way a human employee would. They can’t build context, learn from mistakes, or develop the subtle understanding that makes human workers invaluable. We’re solving this at the infrastructure level.”

This funding round will be used to accelerate the development of Composio’s learning infrastructure. The round was led by Lightspeed Venture Partners, with participation from Vercel’s CEO Guillermo Rauch, HubSpot’s CTO and founder Dharmesh Shah, investor Gokul Rajaram, Rubrik’s co-founder Soham Mazumdar, V Angel, Blitzscaling Ventures, Operator Partners, and Agent Fund by Yohei Nakajima, in addition to existing investors Elevation Capital and Together Fund.

Parasoft brings agentic AI to service virtualization in latest release

The company added an agentic AI assistant to its service virtualization solution Virtualize, allowing customers to create virtual services using natural language prompts.

For example, a user could write the prompt: “Create a virtual service for a payment processing API. There should be a POST and a GET operation. The operations should require an account id along with other data related to payment.”

The platform will then draw from the provided API service definitions, sample requests/responses, and written descriptions of a service to generate a virtual service with dynamic behavior, parameterized responses, and the correct default values.

Slack’s AI search now works across an organization’s entire knowledge base

Slack is introducing a number of new AI-powered tools to make team collaboration easier and more intuitive.

“Today, 60% of organizations are using generative AI. But most still fall short of its productivity promise. We’re changing that by putting AI where work already happens — in your messages, your docs, your search — all designed to be intuitive, secure, and built for the way teams actually work,” Slack wrote in a blog post.

The new enterprise search capability will enable users to search not just in Slack, but in any app that is connected to it. It can search across systems of record like Salesforce or Confluence, file repositories like Google Drive or OneDrive, developer tools like GitHub or Jira, and project management tools like Asana.

“Enterprise search is about turning fragmented information into actionable insights, helping you make quicker, more informed decisions, without leaving Slack,” the company explained.

The platform is also getting AI-generated channel recaps and thread summaries, helping users catch up on conversations quickly. It is introducing AI-powered translations as well to enable users to read and respond in their preferred language.

Anthropic’s Claude Code gets new analytics dashboard to provide insights into how teams are using AI tooling

Anthropic has announced the launch of a new analytics dashboard in Claude Code to give development teams insights into how they are using the tool.

It tracks metrics such as lines of code accepted, suggestion acceptance rate, total user activity over time, total spend over time, average daily spend for each user, and average daily lines of code accepted for each user.

These metrics can help organizations understand developer satisfaction with Claude Code suggestions, track code generation effectiveness, and identify opportunities for process improvements.

Mistral launches first voice model

Voxtral is an open-weight model for speech understanding that Mistral says offers “state-of-the-art accuracy and native semantic understanding in the open, at less than half the price of comparable APIs. This makes high-quality speech intelligence accessible and controllable at scale.”

It comes in two model sizes: a 24B version for production-scale applications and a 3B version for local deployments. Both sizes are available under the Apache 2.0 license and can be accessed via Mistral’s API.

JFrog releases MCP server

The MCP server will allow users to create and view projects and repositories, get detailed vulnerability information from JFrog, and review the components in use at an organization.

“The JFrog Platform delivers DevOps, Security, MLOps, and IoT services across your software supply chain. Our new MCP Server enhances its accessibility, making it even easier to integrate into your workflows and the daily work of developers,” JFrog wrote in a blog post.

JetBrains announces updates to its coding agent Junie

Junie is now fully integrated into GitHub, enabling asynchronous development with features such as the ability to delegate multiple tasks simultaneously, make quick fixes without opening the IDE, collaborate with the team directly in GitHub, and switch seamlessly between the IDE and GitHub. Junie on GitHub is currently in an early access program and only supports JVM languages and PHP.

JetBrains also added support for MCP to enable Junie to connect to external sources. Other new features include 30% faster task completion speed and support for remote development on macOS and Linux.

Gemini API gets first embedding model

These types of models generate embeddings for words, phrases, sentences, and code, to provide context-aware results that are more accurate than keyword-based approaches. “They efficiently retrieve relevant information from knowledge bases, represented by embeddings, which are then passed as additional context in the input prompt to language models, guiding it to generate more informed and accurate responses,” the Gemini docs say.

The embedding model in the Gemini API supports over 100 languages and input lengths of up to 2,048 tokens. It will be offered in both free and paid tiers to enable developers to experiment with it for free and then scale up as needed.
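A minimal sketch of generating an embedding through the Gemini API with the google-genai Python SDK; the model ID “gemini-embedding-001” is an assumption based on Google’s naming for its first Gemini embedding model and may differ.

```python
# Sketch: requesting an embedding from the Gemini API via the google-genai SDK.
# Assumes a GEMINI_API_KEY environment variable; the model ID "gemini-embedding-001"
# is an assumption and may differ from the identifier available in your account.
from google import genai

client = genai.Client()  # reads GEMINI_API_KEY from the environment

result = client.models.embed_content(
    model="gemini-embedding-001",
    contents="How do I authenticate requests to the billing service?",
)

# Each returned embedding is a list of floats that can be stored in a vector
# database and compared with cosine similarity for retrieval.
vector = result.embeddings[0].values
print(len(vector), vector[:5])
```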
