Artificial Intelligence

Category: Amazon SageMaker AI

Choosing the right approach for generative AI-powered structured data retrieval

In this post, we explore five patterns for implementing LLM-powered structured data query capabilities on AWS, including direct conversational interfaces, BI tool enhancements, and custom text-to-SQL solutions.
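
For the custom text-to-SQL pattern specifically, the minimal sketch below (not code from the post) uses the Amazon Bedrock Converse API to turn a natural-language question into SQL. The model ID, table schema, and question are placeholder assumptions.

```python
import boto3

# Hypothetical table schema the model is allowed to query (placeholder).
SCHEMA = """CREATE TABLE orders (
    order_id INT, customer_id INT, order_date DATE, total_amount DECIMAL(10, 2)
);"""

bedrock = boto3.client("bedrock-runtime")

def text_to_sql(question: str) -> str:
    """Ask an LLM on Amazon Bedrock to translate a question into a SQL query."""
    response = bedrock.converse(
        modelId="anthropic.claude-3-haiku-20240307-v1:0",  # example model ID
        system=[{"text": f"You write ANSI SQL for this schema:\n{SCHEMA}\n"
                          "Return only the SQL statement, with no explanation."}],
        messages=[{"role": "user", "content": [{"text": question}]}],
        inferenceConfig={"maxTokens": 512, "temperature": 0},
    )
    return response["output"]["message"]["content"][0]["text"].strip()

print(text_to_sql("What was the total order value per customer last month?"))
```

The generated statement would still need to be validated and executed by your own query layer; the other patterns in the post trade this custom code for managed conversational or BI integrations.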

Build and deploy AI inference workflows with new enhancements to the Amazon SageMaker Python SDK

In this post, we provide an overview of the user experience, detailing how to set up and deploy these workflows with multiple models using the SageMaker Python SDK. We walk through examples of building complex inference workflows, deploying them to SageMaker endpoints, and invoking them for real-time inference.
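
As a rough sketch of the underlying build-and-deploy flow (not the exact multi-model workflow API introduced in the post), the snippet below uses ModelBuilder from the SageMaker Python SDK to package a single model, deploy it to a real-time endpoint, and invoke it. The model ID, role ARN, instance type, and sample payloads are placeholders.

```python
from sagemaker.serve.builder.model_builder import ModelBuilder
from sagemaker.serve.builder.schema_builder import SchemaBuilder

# SchemaBuilder infers serialization from example payloads (placeholders here).
schema = SchemaBuilder(sample_input={"inputs": "Hello"},
                       sample_output=[{"generated_text": "Hi"}])

builder = ModelBuilder(
    model="HuggingFaceH4/zephyr-7b-beta",                       # example Hugging Face model ID
    schema_builder=schema,
    role_arn="arn:aws:iam::123456789012:role/SageMakerRole",    # placeholder role
)

model = builder.build()                   # resolve container, model server, and artifacts
predictor = model.deploy(
    initial_instance_count=1,
    instance_type="ml.g5.2xlarge",        # example GPU instance
)

print(predictor.predict({"inputs": "Summarize the SageMaker Python SDK in one sentence."}))
```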

[Figure: time series plot of spacecraft velocity data in ECEF coordinates, showing the three velocity components, detected anomalies marked in red, and the anomaly score overlaid as a dashed line.]

Using Amazon SageMaker AI Random Cut Forest for NASA’s Blue Origin spacecraft sensor data

In this post, we demonstrate how to use SageMaker AI to apply the Random Cut Forest (RCF) algorithm to detect anomalies in spacecraft position, velocity, and quaternion orientation data from NASA and Blue Origin’s demonstration of lunar Deorbit, Descent, and Landing Sensors (BODDL-TP).
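
For a feel of the workflow, the sketch below trains the SageMaker built-in RCF estimator on placeholder telemetry, deploys an endpoint, and flags high-scoring records. The instance types, hyperparameters, and 3-sigma threshold are illustrative assumptions, not values from the post.

```python
import numpy as np
import sagemaker
from sagemaker import RandomCutForest

session = sagemaker.Session()
role = sagemaker.get_execution_role()  # assumes a SageMaker execution environment

# Placeholder telemetry: each row holds position, velocity, and quaternion features.
telemetry = np.random.randn(5_000, 10).astype("float32")

rcf = RandomCutForest(
    role=role,
    instance_count=1,
    instance_type="ml.m5.xlarge",
    num_samples_per_tree=512,
    num_trees=100,
    sagemaker_session=session,
)

# Train on the telemetry and host the model on a real-time endpoint.
rcf.fit(rcf.record_set(telemetry))
detector = rcf.deploy(initial_instance_count=1, instance_type="ml.m5.xlarge")

# Score each record; higher scores indicate more anomalous behavior.
scores = np.array([rec.label["score"].float32_tensor.values[0]
                   for rec in detector.predict(telemetry)])

# Flag records more than three standard deviations above the mean score.
anomalies = np.where(scores > scores.mean() + 3 * scores.std())[0]
```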

Power your LLM training and evaluation with the new SageMaker AI generative AI tools

Today we are excited to introduce the Text Ranking and Question and Answer UI templates to SageMaker AI customers. In this blog post, we’ll walk you through how to set up these templates in SageMaker to create high-quality datasets for training your large language models.

Build a scalable AI video generator using Amazon SageMaker AI and CogVideoX

In recent years, the rapid advancement of artificial intelligence and machine learning (AI/ML) technologies has revolutionized various aspects of digital content creation. One particularly exciting development is the emergence of video generation capabilities, which offer unprecedented opportunities for companies across diverse industries. This technology allows for the creation of short video clips that can be […]

Accelerate foundation model training and inference with Amazon SageMaker HyperPod and Amazon SageMaker Studio

In this post, we discuss how SageMaker HyperPod and SageMaker Studio can improve and speed up the development experience for data scientists by combining the IDEs and tooling of SageMaker Studio with the scalability and resiliency of SageMaker HyperPod on Amazon EKS. The solution also simplifies setup for administrators of the centralized system by using the governance and security capabilities offered by AWS services.
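
As a rough illustration of the HyperPod side of this architecture (a sketch under assumptions, where every name, ARN, and instance count is a placeholder), a HyperPod cluster orchestrated by an existing Amazon EKS cluster can be created with the SageMaker CreateCluster API:

```python
import boto3

sm = boto3.client("sagemaker")

# All names, ARNs, and sizes below are placeholders for illustration only.
sm.create_cluster(
    ClusterName="research-hyperpod",
    Orchestrator={"Eks": {
        "ClusterArn": "arn:aws:eks:us-east-1:123456789012:cluster/research-eks"
    }},
    InstanceGroups=[{
        "InstanceGroupName": "gpu-workers",
        "InstanceType": "ml.p5.48xlarge",
        "InstanceCount": 4,
        "ExecutionRole": "arn:aws:iam::123456789012:role/HyperPodExecutionRole",
        "LifeCycleConfig": {
            "SourceS3Uri": "s3://my-bucket/hyperpod-lifecycle/",
            "OnCreate": "on_create.sh",
        },
    }],
)
```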

Implement semantic video search using open source large vision models on Amazon SageMaker and Amazon OpenSearch Serverless

In this post, we demonstrate how to use large vision models (LVMs) for semantic video search using natural language and image queries. We introduce some use case-specific methods, such as temporal frame smoothing and clustering, to enhance the video search performance. Furthermore, we demonstrate the end-to-end functionality of this approach by using both asynchronous and real-time hosting options on Amazon SageMaker AI to perform video, image, and text processing using publicly available LVMs on the Hugging Face Model Hub. Finally, we use Amazon OpenSearch Serverless with its vector engine for low-latency semantic video search.
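
To make the retrieval half concrete, the following sketch (with placeholder endpoint, index name, and embedding dimension) creates a k-NN index in an OpenSearch Serverless vector collection and runs a nearest-neighbor query over frame embeddings; the query embedding is assumed to come from the same LVM used at indexing time.

```python
import boto3
from opensearchpy import OpenSearch, RequestsHttpConnection, AWSV4SignerAuth

# Placeholder endpoint and region for an OpenSearch Serverless vector collection.
HOST = "abc123.us-east-1.aoss.amazonaws.com"
REGION = "us-east-1"

auth = AWSV4SignerAuth(boto3.Session().get_credentials(), REGION, "aoss")
client = OpenSearch(
    hosts=[{"host": HOST, "port": 443}],
    http_auth=auth,
    use_ssl=True,
    connection_class=RequestsHttpConnection,
)

# Index frame embeddings produced by the LVM (dimension is model-dependent; 512 is an example).
client.indices.create(index="video-frames", body={
    "settings": {"index": {"knn": True}},
    "mappings": {"properties": {
        "embedding": {"type": "knn_vector", "dimension": 512},
        "video_id": {"type": "keyword"},
        "timestamp_s": {"type": "float"},
    }},
})

def search_frames(query_embedding, k=5):
    """Return the k indexed frames closest to a text or image query embedding."""
    return client.search(index="video-frames", body={
        "size": k,
        "query": {"knn": {"embedding": {"vector": query_embedding, "k": k}}},
    })
```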

Modernize and migrate on-premises fraud detection machine learning workflows to Amazon SageMaker

Radial is the largest third-party logistics (3PL) fulfillment provider, also offering integrated payment, fraud detection, and omnichannel solutions to mid-market and enterprise brands. In this post, we share how Radial optimized the cost and performance of their fraud detection machine learning (ML) applications by modernizing their ML workflow using Amazon SageMaker.

Run small language models cost-efficiently with AWS Graviton and Amazon SageMaker AI

In this post, we demonstrate how to deploy a small language model on SageMaker AI by extending our pre-built containers to be compatible with AWS Graviton instances. We first provide an overview of the solution, and then provide detailed implementation steps to help you get started. You can find the example notebook in the GitHub repo.
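
As a hedged sketch of the deployment step (the image URI, model artifact location, and instance type are placeholders, and the post's notebook may differ), a container image extended for ARM64 can be deployed to a Graviton-based endpoint with the SageMaker Python SDK:

```python
import sagemaker
from sagemaker.model import Model
from sagemaker.predictor import Predictor
from sagemaker.serializers import JSONSerializer
from sagemaker.deserializers import JSONDeserializer

session = sagemaker.Session()
role = sagemaker.get_execution_role()  # assumes a SageMaker execution environment

# Placeholder ECR image (built by extending a SageMaker container for ARM64/Graviton)
# and placeholder model artifacts in S3.
model = Model(
    image_uri="123456789012.dkr.ecr.us-east-1.amazonaws.com/slm-graviton:latest",
    model_data="s3://my-bucket/models/slm/model.tar.gz",
    role=role,
    predictor_cls=Predictor,
    sagemaker_session=session,
)

# Deploy to a Graviton-based instance type (ml.c7g shown here as an example).
predictor = model.deploy(
    initial_instance_count=1,
    instance_type="ml.c7g.4xlarge",
    serializer=JSONSerializer(),
    deserializer=JSONDeserializer(),
)

print(predictor.predict({"inputs": "Write a haiku about autumn."}))
```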