How Apollo Tyres is unlocking machine insights using agentic AI-powered Manufacturing Reasoner

This is a joint post co-authored with Harsh Vardhan, Global Head, Digital Innovation Hub, Apollo Tyres Ltd.

Apollo Tyres, headquartered in Gurgaon, India, is a prominent international tire manufacturer with production facilities in India and Europe. The company markets its products under its two global brands, Apollo and Vredestein, and its products are available in over 100 countries through a vast network of branded, exclusive, and multiproduct outlets. The company's product portfolio spans the entire range of passenger car, SUV, MUV, light truck, truck-bus, two-wheeler, agriculture, industrial, specialty, bicycle, and off-the-road tires and retreading materials.

Apollo Tyres has embarked on an ambitious digital transformation journey to streamline processes across its entire business value chain, including manufacturing. The company collaborated with Amazon Web Services (AWS) to implement a centralized data lake using AWS services. Apollo Tyres then went further, unlocking insights from the data lake across business functions using generative AI powered by Amazon Bedrock.

In this pursuit, they developed Manufacturing Reasoner, powered by Amazon Bedrock Agents, a custom solution that automates multistep tasks by seamlessly connecting with the company’s systems, APIs, and data sources. The solution has been developed, deployed, piloted, and scaled out to identify areas to improve, standardize, and benchmark cycle time beyond the total effective equipment performance (TEEP) and overall equipment effectiveness (OEE) of highly automated curing presses. The curing machines are connected to the AWS Cloud through the industrial Internet of Things (IoT) and send real-time sensor, process, operational, event, and condition monitoring data to the AWS Cloud.

In this post, we share how Apollo Tyres used generative AI with Amazon Bedrock to harness the insights from their machine data in a natural language interaction mode to gain a comprehensive view of its manufacturing processes, enabling data-driven decision-making and optimizing operational efficiency.

The challenge: Reducing dry cycle time for highly automated curing presses and improving operational efficiency

Before the Manufacturing Reasoner solution, plant engineers conducted manual analysis to identify bottlenecks and focus areas using an industrial IoT descriptive dashboard for the dry cycle time (DCT) of curing presses across all machines, SKUs, cure mediums, suppliers, machine types, subelements, sub-subelements, and more. Analyzing and identifying these focus areas across curing presses, among millions of parameters on real-time operations, used to take from an average of 2 elapsed hours to approximately 7 hours per issue. Additionally, subelemental-level analysis (that is, bottleneck analysis of subelemental and sub-subelemental activities) wasn’t possible using traditional root cause analysis (RCA) tools. The analysis required subject matter experts (SMEs) from departments such as manufacturing, technology, and industrial engineering to come together and perform RCA. Because insights weren’t generated in real time, corrective actions were delayed.

Solution impact

With the agentic AI Manufacturing Reasoner, the goal was to empower plant engineers to act on accelerated RCA insights and take corrective actions that reduce curing DCT. This agentic AI solution and its virtual experts (agents) let plant engineers interact with industrial IoT big data in natural language (English) to retrieve relevant insights and receive actionable recommendations for resolving operational issues in DCT processes. The RCA agent offers detailed insights and self-diagnosis recommendations, identifying which of the more than 25 automated subelements or activities to focus on across more than 250 automated curing presses, more than 140 stock-keeping units (SKUs), three types of curing mediums, and two types of machine suppliers, with the goal of achieving the best possible reduction in DCT across three plants. Through this innovation, plant engineers now have a thorough understanding of their manufacturing bottlenecks. This comprehensive view supports data-driven decision-making and enhances operational efficiency. They realized an approximate 88% reduction in effort for assisted RCA of DCT through self-diagnosis of bottleneck areas on streaming, real-time data: the generative AI assistant reduces DCT RCA from up to 7 hours per issue to less than 10 minutes per issue. Overall, the targeted benefit is expected to save approximately 15 million Indian rupees (INR) per year in the passenger car radial (PCR) division alone across their three manufacturing plants.

This virtual reasoner also provides real-time triggers that highlight continuous anomalous shifts in DCT for mistake-proofing or error prevention in line with the Poka-yoke approach, leading to appropriate preventive actions. The Manufacturing Reasoner offers these additional benefits:

  • Element-wise cycle time observability, with graphs and statistical process control (SPC) charts, and direct press-to-press comparison on real-time streaming data
  • On-demand RCA on streaming data, along with daily alerts to manufacturing SMEs
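The SPC-chart observability above boils down to comparing live cycle times against control limits derived from a baseline. The following is an illustrative, stdlib-only sketch, not the production implementation; the baseline values and field semantics are hypothetical:

```python
from statistics import mean, stdev

def spc_limits(cycle_times, sigma=3):
    """Center line and control limits from a baseline DCT sample."""
    cl = mean(cycle_times)
    s = stdev(cycle_times)
    return cl, cl - sigma * s, cl + sigma * s

# Hypothetical baseline dry cycle times (seconds) for one curing press
baseline = [62.1, 61.8, 62.4, 61.9, 62.0, 62.2, 61.7, 62.3]
cl, lcl, ucl = spc_limits(baseline)

# Flag streaming cycles that fall outside the control limits
stream = [62.0, 75.0, 61.9]
out_of_control = [t for t in stream if not (lcl <= t <= ucl)]
print(out_of_control)  # [75.0]
```

In a real deployment, the limits would be recomputed per press and SKU, and out-of-control points would feed the daily alerts to manufacturing SMEs.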

“Imagine a world where business associates make real-time, data-driven decisions, and AI collaborates with humans. Our transformative generative AI solution is designed, developed, and deployed to make this vision a reality. This in-house Manufacturing Reasoner, powered by generative AI, is not about replacing human intelligence; it is about amplifying it.”

– Harsh Vardhan, Global Head, Digital Innovation Hub, Apollo Tyres Ltd.

Solution overview

By using Amazon Bedrock features, Apollo Tyres implemented an advanced auto-diagnosis Manufacturing Reasoner designed to streamline RCA and enhance decision-making. The tool uses a generative AI–based machine root cause reasoner that facilitates accurate analysis through natural language queries, provides predictive insights, and references a reliable Amazon Redshift database for actionable data. The system enables proactive maintenance by predicting potential issues, optimizing cycle times, and reducing inefficiencies. It also supports staff with dynamic reporting and visualization capabilities, significantly improving overall productivity and operational efficiency.

The following diagram illustrates the multibranch workflow.

The following diagram illustrates the process flow.

To enable the workflow, Apollo Tyres followed these steps:

  1. Users ask their questions in natural language through the UI, which is a Chainlit application hosted on Amazon Elastic Compute Cloud (Amazon EC2).
  2. The question is picked up by the primary AI agent, which classifies the complexity of the question and decides which agent to call for multistep reasoning, with the help of different AWS services.
  3. Amazon Bedrock Agents uses Amazon Bedrock Knowledge Bases and the vector database capabilities of Amazon OpenSearch Service to extract relevant context for the request:
    1. Complex transformation engine agent – This agent works as an on-demand complex transformation engine for the given context and specific question.
    2. RCA agent – This agent for Amazon Bedrock constructs a multistep, multi–large language model (LLM) workflow to perform detailed automated RCA, which is particularly useful for complex diagnostic scenarios.
  4. The primary agent calls the explainer agent and visualization agent concurrently using multiple threads:
    1. Explainer agent – This agent for Amazon Bedrock uses Anthropic’s Claude Haiku model to generate explanations in two parts:
      1. Evidence – Provides a step-by-step logical explanation of the executed query or common table expression (CTE).
      2. Conclusion – Offers a brief answer to the question, referencing Amazon Redshift records.
    2. Visualization agent – This agent for Amazon Bedrock generates Plotly chart code for creating visual charts using Anthropic’s Claude Sonnet model.
  5. The primary agent combines the outputs (records, explanation, chart code) from both agents and streams them to the application.
  6. The UI renders the result to the user by dynamically displaying the statistical plots and formatting the records in a table.
  7. Amazon Bedrock Guardrails helped set up tailored filters and response limits, which made sure that interactions with machine data were not only secure but also relevant and compliant with established operational guidelines. The guardrails also helped prevent errors and inaccuracies by automatically verifying the validity of information, which was essential for accurately identifying the root causes of manufacturing problems.
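The fan-out in steps 4 and 5, where the primary agent calls the explainer and visualization agents concurrently and combines their outputs, can be sketched as follows. The agent functions here are hypothetical stand-ins; in the real solution they would be Amazon Bedrock agent invocations:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-ins for the two Amazon Bedrock agents
def explainer_agent(records, question):
    return {"evidence": f"step-by-step logic for: {question}",
            "conclusion": f"answer referencing {len(records)} Redshift records"}

def visualization_agent(records):
    return "fig = px.bar(...)"  # stand-in for generated Plotly chart code

def primary_agent(records, question):
    """Fan out to both agents concurrently, then combine their outputs."""
    with ThreadPoolExecutor(max_workers=2) as pool:
        explain = pool.submit(explainer_agent, records, question)
        chart = pool.submit(visualization_agent, records)
        return {"records": records,
                "explanation": explain.result(),
                "chart_code": chart.result()}

result = primary_agent([{"press": "P-101", "dct_s": 62.0}], "Why did DCT rise?")
print(sorted(result))  # ['chart_code', 'explanation', 'records']
```

Running the two downstream agents in parallel rather than sequentially is one way to keep end-to-end latency close to the slower of the two calls instead of their sum.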

The following screenshot shows an example of the Manufacturing Reasoner response.

The following diagram shows an example of the Manufacturing Reasoner dynamic chart visualization.
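Rendering a dynamic chart from model-generated Plotly code implies executing that code at runtime. One common pattern is to run it in a restricted namespace. This is an illustrative sketch only, with a stand-in generated snippet that builds a plain chart-spec dict instead of a real Plotly figure, so it stays self-contained:

```python
# Stand-in for code returned by the visualization agent (hypothetical)
GENERATED_CHART_CODE = """
fig_spec = {
    "type": "bar",
    "x": [row["press"] for row in df],
    "y": [row["dct_s"] for row in df],
    "title": "Press-to-press DCT comparison",
}
"""

def render_generated_chart(code, df):
    """Execute generated chart code in a namespace with no builtins exposed."""
    namespace = {"df": df, "__builtins__": {}}
    exec(code, namespace)
    return namespace["fig_spec"]

df = [{"press": "P-101", "dct_s": 62.0}, {"press": "P-102", "dct_s": 64.5}]
spec = render_generated_chart(GENERATED_CHART_CODE, df)
print(spec["x"])  # ['P-101', 'P-102']
```

A production UI would add stricter sandboxing and error handling around the generated code, since even a constrained `exec` is not a complete security boundary.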

“As we integrate this generative AI solution, built on Amazon Bedrock, to automate RCA into our plant curing machines, we’ve seen a profound transformation in how we diagnose issues and optimize operations,” says Vardhan. “The precision of generative AI–driven insights has enabled plant engineers to not only accelerate problem finding from an average of 2 hours per scenario to less than 10 minutes now but also refine focus areas to make improvements in cycle time (beyond TEEP). Real-time alerts notify process SMEs to act on bottlenecks immediately and advanced diagnosis features of the solution provide subelement-level information about what’s causing deviations.”

Lessons learned

Apollo Tyres learned the following takeaways from this journey:

  • Applying generative AI to streaming real-time industrial IoT data requires extensive research due to the unique nature of each use case. To develop an effective manufacturing reasoner for automated RCA scenarios, Apollo Tyres explored several strategies from the prototype to the proof-of-concept stages.
  • In the beginning, the solution faced significant delays in response times when using Amazon Bedrock, particularly when multiple agents were involved. The initial response times exceeded 1 minute for data retrieval and processing by all three agents. To address this issue, efforts were made to optimize performance. By carefully selecting appropriate LLMs and small language models (SLMs) and disabling unused workflows within the agent, the response time was successfully reduced to approximately 30–40 seconds. These optimizations played a crucial role in boosting the solution’s efficiency and responsiveness, leading to smoother operations and an enhanced user experience across the system.
  • While using LLMs to generate code for visualizing data through charts, Apollo Tyres faced challenges with extensive datasets: the generated code often contained inaccuracies or failed to handle large volumes of data correctly. They addressed this through continuous refinement, iterating on the code generation process until it could dynamically produce chart code that efficiently handles a data frame regardless of the number of records involved. This iterative approach significantly improved the reliability and robustness of chart generation, making sure it could handle substantial datasets without compromising accuracy or performance.
  • Consistency issues were effectively resolved by making sure the correct data format is ingested into the Amazon data lake for the knowledge base, structured as follows:
{
  "Question": "<question in natural language>",
  "Query": "<Complex Transformation Engine scripts>",
  "Metadata": "<metadata>"
}

Next steps

The Apollo Tyres team is scaling the successful solution from tire curing to other areas across different locations, advancing toward its Industry 5.0 goal. To achieve this, Amazon Bedrock will play a pivotal role in extending the multi-agentic Retrieval Augmented Generation (RAG) solution. This expansion involves using specialized agents, each dedicated to specific functionalities. By implementing agents with distinct roles, the team aims to enhance the solution’s capabilities across diverse operational domains.

Furthermore, the team is focused on benchmarking and optimizing the time required to deliver accurate responses to queries. This ongoing effort will streamline the process, providing faster and more efficient decision-making and problem-solving capabilities across the extended solution. Apollo Tyres is also exploring generative AI using Amazon Bedrock for its other manufacturing and nonmanufacturing processes.

Conclusion

In summary, Apollo Tyres used generative AI through Amazon Bedrock and Amazon Bedrock Agents to transform raw machine data into actionable insights, achieving a holistic view of their manufacturing operations. This enabled more informed, data-driven decision-making and enhanced operational efficiency. By integrating generative AI–based manufacturing reasoners and RCA agents, they developed a machine cycle time diagnosis assistant capable of pinpointing focus areas across more than 25 subprocesses, more than 250 automated curing presses, more than 140 SKUs, three curing mediums, and two machine suppliers. This solution helped drive targeted improvements in DCT across three plants, with targeted annualized savings of approximately INR 15 million within the PCR segment alone and achieving an approximate 88% reduction in manual effort for root cause analysis.

“By embracing this agentic AI-driven approach, Apollo Tyres is redefining operational excellence—unlocking hidden capacity through advanced ‘asset sweating’ while enabling our plant engineers to communicate with machines in natural language. These bold, in-house AI initiatives are not just optimizing today’s performance but actively building the firm foundation for intelligent factories of the future driven by data and human-machine collaboration.”

– Harsh Vardhan

To learn more about Amazon Bedrock, refer to Getting started with Amazon Bedrock. If you have feedback about this post, leave a comment in the comments section.


About the authors

Harsh Vardhan is a distinguished global leader in business-first, AI-first digital transformation with over two decades of industry experience. As the Global Head of the Digital Innovation Hub at Apollo Tyres Limited, he leads the industrialisation of AI-led digital manufacturing, Industry 4.0/5.0 excellence, and an enterprise-wide AI-first innovation culture. He is an A+ contributor in the field of advanced AI with an Arctic Code Vault badge, a Strategic Intelligence member at the World Economic Forum, and an executive member of a CII National Committee. He is an avid reader and loves to drive.

Gautam Kumar is a Solutions Architect at Amazon Web Services. He helps various enterprise customers design and architect innovative solutions on AWS. Outside work, he enjoys traveling and spending time with family.

Deepak Dixit is a Solutions Architect at Amazon Web Services, specializing in Generative AI and cloud solutions. He helps enterprises architect scalable AI/ML workloads, implement Large Language Models (LLMs), and optimize cloud-native applications.
