This week in AI dev tools: Gemini API Batch Mode, Amazon SageMaker AI updates, and more (July 11, 2025)

uttu

Gemini API gets Batch Mode

Batch Mode allows large jobs to be submitted through the Gemini API. Results are returned within 24 hours, and the delayed processing offers benefits such as a 50% cost reduction and higher rate limits.

“Batch Mode is the perfect tool for any task where you have your data ready upfront and don’t need an immediate response,” Google wrote in a blog post.
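The batch workflow starts with a file of pre-assembled requests. A minimal sketch of building one, assuming the JSONL input format shown in Google's Batch Mode examples, where each line pairs a caller-chosen key with a standard GenerateContent-style request body (the key names and file name here are illustrative; check the current Gemini API docs before relying on them):

```python
import json

# Prompts whose answers we don't need immediately: ideal for Batch Mode.
prompts = [
    "Summarize the history of the transistor.",
    "Explain vector databases in one paragraph.",
]

# One JSON object per line; the "key" lets us match responses
# back to requests when results arrive (within 24 hours).
lines = []
for i, prompt in enumerate(prompts):
    lines.append(json.dumps({
        "key": f"request-{i}",
        "request": {"contents": [{"parts": [{"text": prompt}]}]},
    }))

with open("batch_requests.jsonl", "w") as f:
    f.write("\n".join(lines))
```

The file would then be uploaded and referenced when creating the batch job; results come back keyed the same way, so order of completion does not matter.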

AWS announces new features in SageMaker AI

SageMaker HyperPod, which enables scaling generative AI model development across thousands of accelerators, gained a new CLI and SDK. It also received a new observability dashboard showing performance metrics, resource utilization, and cluster health, along with the ability to deploy open-weight models from Amazon SageMaker JumpStart on SageMaker HyperPod.

SageMaker AI also gained new remote connection support, allowing developers to connect to it from a local VS Code instance.

Finally, SageMaker AI now offers fully managed MLflow 3.0, which provides a straightforward experience for tracking experiments, monitoring training progress, and gaining deeper insights into model behavior.

Anthropic proposes transparency framework for frontier AI development

Anthropic is calling for the creation of an AI transparency framework that can be applied to large AI developers to ensure accountability and safety. 

“As models advance, we have an unprecedented opportunity to accelerate scientific discovery, healthcare, and economic growth. Without safe and responsible development, a single catastrophic failure could halt progress for decades. Our proposed transparency framework offers a practical first step: public visibility into safety practices while preserving private sector agility to deliver AI’s transformative potential,” Anthropic wrote in a post. 

As such, it is proposing its framework in the hope that it could be applied at the federal, state, or international level. The initial version of the framework includes six core tenets to be followed, including restricting the framework to large AI developers only, requirements for system cards and documentation, and the flexibility to evolve as AI evolves.

Docker Compose gets new features for building and running agents

Docker has updated Compose with new features that will make it easier for developers to build, ship, and run AI agents. 

Developers can define open models, agents, and MCP-compatible tools in a compose.yaml file and then spin up an agentic stack with a single command: docker compose up.
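A minimal sketch of such a stack, using Compose's top-level models element; the service names, model reference, and images below are assumptions for illustration, not a canonical example from Docker:

```yaml
# Hypothetical agentic stack: one open model, one agent service,
# one MCP tool gateway. Names and images are illustrative.
models:
  llm:
    model: ai/llama3.2          # served locally by Docker Model Runner

services:
  agent:
    build: .                    # your agent code (e.g., a LangGraph app)
    models:
      - llm                     # wires the model's endpoint into the service
  mcp-gateway:
    image: docker/mcp-gateway   # exposes MCP-compatible tools to the agent
```

With a file like this in place, docker compose up starts the model, the agent, and its tools together.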

Compose integrates with several agentic frameworks, including LangGraph, Embabel, Vercel AI SDK, Spring AI, CrewAI, Google’s ADK, and Agno.

Coder reimagines development environments to make them better suited to AI agents

Coder is announcing the launch of its AI cloud development environments (CDEs), bringing together IDEs, dynamic policy governance, and agent orchestration into a single platform. 

According to Coder, current development infrastructure was built for humans, not agents, and agents have different requirements to be successful. “Agents need secure environments, granular permissions, fast boot times, and full toolchain access — all while maintaining governance and compliance,” the company wrote in an announcement. 

Coder’s new CDE attempts to solve this problem by introducing features designed for both humans and agents.

Some capabilities include fully isolated environments where AI agents and developers work alongside each other, a dual-firewall model to scope agent access, and an interface for running and managing AI agents.

DigitalOcean unifies AI offerings under GradientAI

GradientAI is an umbrella for all of the company’s AI offerings, and it is split into three categories: Infrastructure, Platform, and Applications.

GradientAI Infrastructure features building blocks such as GPU Droplets, Bare Metal GPUs, vector databases, and optimized software for improving model performance; GradientAI Platform includes capabilities for building and monitoring agents, such as model integration, function calling, RAG, external data, and built-in evaluation tools; and GradientAI Applications includes prebuilt agents.

“If you’re already building with our AI tools, there’s nothing you need to change. All of your existing projects and APIs will continue to work as expected. What’s changing is how we bring it all together, with clearer organization, unified documentation, and a product experience that reflects the full potential of our AI platform,” DigitalOcean wrote in a blog post.

Newest LF Decentralized Trust Lab HOPrS identifies whether photos have been altered

OpenOrigins has announced that its Human-Oriented Proof System (HOPrS) has been accepted by LF Decentralized Trust, a Linux Foundation organization, as a new Lab. HOPrS is an open-source framework for determining whether an image has been altered.

It utilizes techniques like perceptual hashes and quadtree segmentation, combined with blockchain technology, to determine how images have been changed.
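To illustrate the perceptual-hash idea in general terms (this is not HOPrS's implementation, just a common textbook variant, the average hash), a minimal sketch over 8x8 grayscale grids:

```python
# Average-hash sketch: a perceptual hash tolerates small edits
# (similar hashes) while flagging large ones (distant hashes).
# Images here are 8x8 grayscale grids as lists of lists of ints.

def average_hash(pixels):
    """Return a 64-bit hash with a 1 bit wherever a pixel exceeds the mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming(a, b):
    """Count differing bits between two hashes; small distance = similar image."""
    return bin(a ^ b).count("1")

original = [[(r * 8 + c) * 4 for c in range(8)] for r in range(8)]
tweaked = [row[:] for row in original]
tweaked[0][0] += 3   # a tiny brightness change barely moves the hash
```

Quadtree segmentation extends this by hashing recursively subdivided regions, so a comparison can localize which parts of an image changed rather than only scoring the whole frame.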

According to OpenOrigins, HOPrS can be used to identify whether content was generated by AI, a capability becoming increasingly important as it grows more difficult to distinguish between AI-generated and human-generated content.

“The addition of HOPrS to the LF Decentralized Trust labs enables our community to access and collaborate on crucial tools for verifying content in the age of generative AI,” said Daniela Barbosa, executive director of LF Decentralized Trust.

Denodo announces DeepQuery

DeepQuery leverages governed enterprise data across multiple systems, departments, and formats to provide answers rooted in real-time information. It is currently available in private preview.

The company also announced support for the Model Context Protocol (MCP), and the latest version of the Denodo AI SDK includes an MCP Server implementation.


Read last week’s updates here.
