
Unlocking the Power of AI for Enterprises with Amazon Bedrock



In today’s fast-paced business landscape, harnessing the power of AI is no longer a luxury but a necessity. Whether it’s boosting productivity within teams or enhancing customer service through intelligent chatbots, AI has become a game-changer for enterprises (and, practically, any organization) worldwide. In this blog post, we’ll dive into the features of Amazon Bedrock, a robust AI platform from the industry-leading public cloud provider, Amazon Web Services.

We will start by exploring Bedrock’s core functionalities: first the different foundation models from companies specializing in AI systems, then how Agents can help complete complex business tasks.

But Bedrock doesn’t stop at pre-packaged solutions: we’ll also examine the exciting possibility of fine-tuning the foundation models to align perfectly with business demands, ensuring that companies meet their goals and achieve optimum results.

Next, we will investigate service compliance, data residency, and privacy solutions, where we can see how Bedrock is designed with the demands of the modern enterprise in mind.

Furthermore, we will delve into the audit logging mechanisms Bedrock can be integrated with. These mechanisms will help businesses uphold transparency, accountability, and security in their AI deployments. Additionally, we will see the pricing options and how companies can efficiently track costs while enjoying the benefits of AI. This makes AI not only a powerful tool but also a cost-effective one.

Core functionalities

Amazon Bedrock is a fully managed service that offers customers a choice of several high-performing foundation models. Because the models come off the shelf and pre-trained, enterprises are relieved of the need to build complex AI clusters and acquire specialized expertise. This not only saves organizations valuable time but also lets them focus on their core competencies, leading to increased productivity and profitability. With Amazon Bedrock, enterprises can leverage state-of-the-art AI models and technologies to achieve their business objectives cost-effectively and efficiently.

Additionally, Amazon Bedrock provides the option to customize foundation models with your own data in a private manner. This can be done through a simple visual interface, without writing any code: you choose the training and validation data sets stored in S3 and, if necessary, adjust the hyperparameters to achieve optimal model performance.
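As a minimal sketch of what calling one of these pre-trained models looks like in code, the snippet below invokes a Titan text model through the Bedrock runtime API with boto3. The helper names (build_titan_body, invoke_titan) are our own, not part of the SDK, and the request/response field names follow the Titan text-generation schema; verify them against the current Bedrock documentation before relying on them.

```python
import json

def build_titan_body(prompt, max_tokens=512, temperature=0.5):
    """Build the JSON request body for a Titan Text model."""
    return json.dumps({
        "inputText": prompt,
        "textGenerationConfig": {
            "maxTokenCount": max_tokens,
            "temperature": temperature,
        },
    })

def invoke_titan(prompt, model_id="amazon.titan-text-express-v1", region="us-east-1"):
    """Call the Bedrock runtime; requires AWS credentials with model access enabled."""
    import boto3
    client = boto3.client("bedrock-runtime", region_name=region)
    response = client.invoke_model(
        modelId=model_id,
        contentType="application/json",
        accept="application/json",
        body=build_titan_body(prompt),
    )
    return json.loads(response["body"].read())["results"][0]["outputText"]
```

Swapping the model is as simple as changing the modelId string, which is exactly the "choice of several foundation models" the service advertises.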

AI21 Labs, a Tel Aviv-based company specializing in Natural Language Processing (NLP), has developed AI systems that can understand and generate natural language. Two of their models are Jurassic-2 Mid and Jurassic-2 Ultra. Mid is designed for tasks such as question answering, summarization, long-form copy generation, advanced information extraction, and many others. Ultra, on the other hand, is currently their most powerful model and can be applied to complex tasks that require advanced text generation and comprehension.

Amazon has been focusing on Artificial Intelligence and Machine Learning for over 20 years and has developed a family of Titan models.

Titan Embeddings is a Language Model that converts textual inputs (words, phrases, or even long texts) into numerical representations known as embeddings, which capture the semantic meaning of the text. While this model does not generate text, it is extremely useful for applications such as personalization and search. By comparing these embeddings, the model can produce responses that are more relevant and contextual than those based solely on word matching.
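To make "comparing embeddings" concrete, here is a toy illustration of the idea. The three-dimensional vectors are made-up stand-ins for real Titan embeddings (which have far more dimensions); cosine similarity scores how close two embeddings, and hence two texts, are in meaning.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 = same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

query = [0.9, 0.1, 0.0]      # e.g. embedding of "laptop sleeve"
product_a = [0.8, 0.2, 0.1]  # e.g. embedding of "notebook bag"
product_b = [0.0, 0.1, 0.9]  # e.g. embedding of "garden hose"
```

Ranking products by similarity to the query surfaces product_a first even though the texts share no words, which is exactly the semantic search and personalization behavior described above.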

Titan Text offers two language models to cater to different needs.

Titan Text Express is a powerful and affordable option that supports over 100 languages. It is suitable for a range of use cases, including retrieval augmented generation, open-ended text generation, brainstorming, summarization, code generation, table creation, data formatting, paraphrasing, chain of thought, rewrite, extraction, Q&A, and chat.

On the other hand, Titan Text Lite is a more compact option that is ideal for basic tasks and fine-tuning, but it supports only English. Its use cases are similar to those of Titan Text Express: open-ended text generation, brainstorming, summarization, code generation, table creation, data formatting, paraphrasing, chain of thought, rewrite, extraction, Q&A, and chat.

Anthropic is an AI safety and research company that’s working to build reliable, interpretable, and steerable AI systems. They have developed three models: Claude Instant v1.2, Claude v1.3, and Claude v2. Claude Instant v1.2 is a faster and cheaper model that can handle a range of tasks including casual dialogue, text analysis, summarization, and document question-answering. Claude v1.3 is an earlier version of Anthropic’s general-purpose large language models. Finally, Claude v2 is Anthropic’s most powerful model, which excels at a wide range of tasks from sophisticated dialogue and creative content generation to detailed instruction following.

Cohere is an AI platform designed specifically for enterprise use. It creates large language models (LLMs) and LLM-powered solutions that enable computers to search, understand meaning, and converse in text. The models are tailored to meet the unique needs of businesses, providing ease of use, strong security, and privacy controls across multiple deployment options. One of the text generation models from Cohere’s platform is called Command. It’s specially trained on data that supports reliable business applications, such as text generation, summarization, copywriting, dialogue, extraction, and question-answering.

Stability AI is an open-source company that develops generative artificial intelligence in collaboration with public and private sector partners. The company aims to bring next-generation infrastructure to a global audience. Their latest product, SDXL, is an advanced image generator that produces more detailed imagery than its predecessor. Additionally, it offers functionalities like image-to-image prompting, inpainting, and outpainting.

Bedrock has an exciting feature called Agents, which is fully managed by AWS and can dynamically invoke APIs to carry out complex business tasks. These agents can be programmed to perform a wide range of tasks such as booking plane tickets, processing complaints, preparing tax filings, or even managing an e-commerce site’s inventory. By leveraging Amazon Bedrock’s fully managed agents, businesses can extend their reasoning capabilities to break down tasks, create an orchestration plan, and execute it seamlessly.
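A hedged sketch of what invoking such an agent looks like from code is shown below. It assumes an agent has already been created and aliased in your account; the agent and alias IDs are placeholders, and collect_completion and ask_agent are our own helper names, not part of the SDK. The agent runtime streams its answer back in chunks, which the helper concatenates.

```python
def collect_completion(response):
    """Concatenate the text chunks of an invoke_agent event stream."""
    parts = []
    for event in response["completion"]:
        chunk = event.get("chunk")
        if chunk:
            parts.append(chunk["bytes"].decode("utf-8"))
    return "".join(parts)

def ask_agent(prompt, agent_id="AGENT_ID", alias_id="ALIAS_ID",
              session_id="demo-session"):
    """Requires AWS credentials; reuse session_id to keep conversational state."""
    import boto3
    client = boto3.client("bedrock-agent-runtime")
    response = client.invoke_agent(
        agentId=agent_id,
        agentAliasId=alias_id,
        sessionId=session_id,
        inputText=prompt,
    )
    return collect_completion(response)
```

The orchestration (breaking the task down, calling your APIs, composing the answer) happens inside the managed agent; the caller only sends natural-language input.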

Compliance and Data Privacy

Amazon Bedrock is compliant with GDPR and HIPAA standards, making it a reliable choice for generative AI tasks. Your data is kept private and is not used to enhance the base models, nor is it shared with the third-party model providers mentioned before. Furthermore, all your data is encrypted at rest using your own AWS Key Management Service (AWS KMS) keys, allowing you full control over how your data and custom models are being stored and accessed. Optionally, using PrivateLink ensures that your data is transmitted to Amazon Bedrock solely through AWS and not through the public internet.

Every model provider has an escrow account where they upload their models. The Amazon Bedrock inference account has the necessary permissions to call these models. However, the escrow accounts themselves don’t have outbound permissions to Amazon Bedrock accounts. Furthermore, model providers are not granted access to Amazon Bedrock logs or customer prompts and continuations.

When you customize a model, Amazon Bedrock fine-tunes it for a particular task without having to annotate large volumes of data. A separate copy of the base foundation model is created exclusively for you, which is not accessible to anyone else. This private copy of the model is then trained using your data, without any of your content being used to train the original base models. You can configure your Amazon VPC settings to access Amazon Bedrock APIs securely.
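For teams that prefer the API over the visual interface, the sketch below shows how such a customization job could be started with boto3. The bucket URIs, role ARN, and hyperparameter values are placeholders, and the set of accepted hyperparameters varies per base model, so check the Bedrock documentation for the model you are customizing.

```python
def build_customization_params(job_name, custom_model_name, role_arn,
                               base_model_id, training_s3_uri, output_s3_uri):
    """Assemble the arguments for a fine-tuning job (values are placeholders)."""
    return {
        "jobName": job_name,
        "customModelName": custom_model_name,
        "roleArn": role_arn,
        "baseModelIdentifier": base_model_id,
        "trainingDataConfig": {"s3Uri": training_s3_uri},
        "outputDataConfig": {"s3Uri": output_s3_uri},
        "hyperParameters": {"epochCount": "2", "learningRate": "0.00001"},
    }

def start_fine_tuning(params):
    """Requires AWS credentials and an IAM role that Bedrock can assume."""
    import boto3
    client = boto3.client("bedrock")
    return client.create_model_customization_job(**params)
```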

Amazon Bedrock ensures the protection of your natural language data by redacting it from its service logs. None of the data you provide for fine-tuning is stored in Amazon Bedrock accounts. Once the training and evaluation data is used to fine-tune a custom model, it remains only in your AWS account.

During the training process, your data exists in Amazon SageMaker instance memory, but it is encrypted on those machines using an XTS-AES-256 cipher implemented on a hardware module on the physical instance. Once the fine-tuning of a custom model is complete, the model weights obtained by training are stored, but none of your training data is preserved.

Metadata for custom and provisioned models, such as the model name and Amazon Resource Name (ARN), is stored in an Amazon DynamoDB table encrypted with a key that the Amazon Bedrock service owns.

All inter-network data in transit within AWS can be encrypted with TLS 1.2. When you request access to the Amazon Bedrock API and console, it is done through a secure (SSL) connection. To authorize access to resources for training and deployment, you pass AWS Identity and Access Management (IAM) roles to Amazon Bedrock.

Governance and Auditability

Amazon provides a range of monitoring and logging services to support governance and auditability. Bedrock is designed to integrate seamlessly with one such tool, AWS CloudTrail, which keeps track of actions taken by users, roles, or services. CloudTrail captures all API calls as events, including calls from the Amazon Bedrock console and code calls to the Amazon Bedrock API operations. By creating a trail, you can enable the continuous delivery of CloudTrail events to an Amazon S3 bucket.

These log files consist of one or more log entries, each representing a single request from any source. Each entry contains information such as the requested action, the date and time of the action, request parameters, and more.
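To illustrate what auditing those log files can look like, the snippet below filters a CloudTrail log down to Bedrock API calls. The sample event is a trimmed-down, made-up example with the general shape of a CloudTrail record; real files delivered to S3 carry a "Records" array of entries with many more fields.

```python
import json

# Illustrative sample only: one Bedrock call and one unrelated S3 call.
SAMPLE_LOG = json.dumps({
    "Records": [
        {
            "eventTime": "2024-01-15T09:30:00Z",
            "eventSource": "bedrock.amazonaws.com",
            "eventName": "InvokeModel",
            "userIdentity": {"arn": "arn:aws:iam::123456789012:role/app-role"},
        },
        {
            "eventTime": "2024-01-15T09:31:00Z",
            "eventSource": "s3.amazonaws.com",
            "eventName": "GetObject",
            "userIdentity": {"arn": "arn:aws:iam::123456789012:role/app-role"},
        },
    ]
})

def bedrock_events(log_text):
    """Filter a CloudTrail log file down to Bedrock API calls."""
    records = json.loads(log_text)["Records"]
    return [r for r in records if r["eventSource"] == "bedrock.amazonaws.com"]
```

In practice the same filter would run over the log files CloudTrail delivers to your S3 bucket, answering "who invoked which model, and when" during an audit.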

In enterprise-grade use cases, it is recommended to use multi-account environments in AWS. In such a setup, CloudTrail sends log files from multiple AWS workload accounts to a single logging account's S3 bucket, preventing unauthorized access and ensuring security. This helps with operational and risk auditing, as well as with detecting potential malicious activity.

To prevent potential misuse, Amazon Bedrock implements automated abuse detection mechanisms. In addition, you can use AWS PrivateLink to establish private connectivity between Amazon Bedrock and your on-premises networks without exposing your traffic to the internet.

Shared Responsibility Model: Security is a shared responsibility between AWS and you. AWS is responsible for protecting the infrastructure that runs AWS services in the AWS Cloud. Your responsibility is determined by the AWS service that you use.


Pricing

On-Demand mode allows you to pay only for what you use, without any time-based term commitments. When using text generation models, you are charged for each input token processed and each output token generated. For embeddings models, you are charged for each input token processed. A token is a basic unit of text, typically a few characters, that a model uses to interpret user input and prompts and to generate results. When using image generation models, you are charged for every image generated.
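As a back-of-the-envelope example of how On-Demand billing adds up, the sketch below estimates the cost of a single text-generation call. The per-1,000-token prices are made-up placeholders; use the current Bedrock pricing page for real figures, which differ per model.

```python
PRICE_PER_1K_INPUT_TOKENS = 0.0008   # USD, hypothetical placeholder
PRICE_PER_1K_OUTPUT_TOKENS = 0.0016  # USD, hypothetical placeholder

def estimate_text_cost(input_tokens, output_tokens):
    """Cost of one text-generation call under the hypothetical prices above."""
    return (input_tokens / 1000) * PRICE_PER_1K_INPUT_TOKENS \
         + (output_tokens / 1000) * PRICE_PER_1K_OUTPUT_TOKENS

# A 2,000-token prompt with a 500-token answer costs about 0.0024 USD
# under these placeholder prices: estimate_text_cost(2000, 500)
```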

In Provisioned Throughput mode, you have the option to purchase model units for a specific base or custom model. This mode is primarily intended for consistent and large inference workloads that require guaranteed throughput. It is important to note that custom models can only be accessed using Provisioned Throughput. Each model unit provides a certain throughput, which is measured by the maximum number of input or output tokens processed per minute. With Provisioned Throughput pricing, which is charged per hour, you have the flexibility to choose between 1-month or 6-month commitment terms.

To keep track of your Bedrock service usage related costs in AWS, you can use AWS Cost Explorer. This tool allows you to easily visualize, understand, and manage your AWS costs and usage over time. With AWS Cost Explorer, you can create custom reports that analyze cost and usage data, and explore your data using filtering and grouping. Additionally, you can create a cost and usage forecast for a future time range for your report, and save and share custom reports to explore different sets of data.

Wrap up

2024 will be the year generative AI is widely adopted across industries, and given the pace of innovation in this area, this is just the start. Get started with your generative AI journey in AWS in a secure and cost-effective way to harness the innovative advantages of LLMs. Partner with TC2, so we can share best practices and real-life experience on security by design, cost-effective architecture patterns, and a robust operating model in the generative AI era.