This post was co-written with Julio P. Roque of Hexagon ALI.

Recognizing the transformative benefits of generative AI for enterprises, we at Hexagon’s Asset Lifecycle Intelligence division sought to enhance how users interact with our Enterprise Asset Management (EAM) products. To that end, we partnered with AWS to develop HxGN Alix, an AI-powered digital worker built with AWS generative AI services. This post explores the strategy, development, and implementation of HxGN Alix, demonstrating how a tailored AI solution can drive efficiency and enhance user satisfaction.

Forming a generative AI strategy: Security, accuracy, and sustainability

Our journey to build HxGN Alix was guided by a strategic approach focused on customer needs, business requirements, and technological considerations. In this section, we describe the key components of our strategy.

Understanding consumer generative AI and enterprise generative AI

Generative AI serves diverse purposes, with consumer and enterprise applications differing in scope and focus. Consumer generative AI tools are designed for broad accessibility, enabling users to perform everyday tasks such as drafting content, generating images, or answering general inquiries. In contrast, enterprise generative AI is tailored to address specific business challenges, including scalability, security, and seamless integration with existing workflows. These systems often integrate with enterprise infrastructures, prioritize data privacy, and use proprietary datasets to provide relevance and accuracy. This customization allows businesses to optimize operations, enhance decision-making, and maintain control over their intellectual property.

Commercial versus open source LLMs

We used multiple evaluation criteria, as illustrated in the following figure, to determine whether to use a commercial or open source large language model (LLM).

LLM evaluation

The evaluation criteria are as follows:

The path to enterprise generative AI: Crawl, walk, run

By adopting a phased approach (as shown in the following figure), we were able to manage development effectively. Because the technology is new, it was paramount to carefully build the right foundation for adoption of generative AI across different business units.

The phases of the approach are:

Identifying the right use case: Digital worker

A critical part of our strategy was identifying a use case that would offer the best return on investment (ROI), as depicted in the following figure. We pinpointed the development of a digital worker as an optimal use case because of its potential to:

By focusing on a digital worker, we aimed to deliver significant value to both internal teams and end-users.

Introducing Alix: A digital worker for asset lifecycle intelligence

HxGN Alix is our AI-powered chat assistant designed to act as a digital worker to revolutionize user interaction with EAM products. Developed to operate securely within high-security environments, HxGN Alix serves multiple functions:

By delivering a tailored, AI-driven approach, HxGN Alix addresses specific challenges faced by our clients, transforming the user experience while upholding stringent security standards.

Understanding system needs to guide technology selection

Before selecting the appropriate technology stack for HxGN Alix, we first identified the high-level system components and expectations of our AI assistant infrastructure. Through this process, we made sure that we understood the core components required to build a robust and scalable solution. The following figure illustrates the core components that we identified.

AI assistant infrastructure

The non-functional requirements are:

Choosing the right technology stack

To develop HxGN Alix, we selected a combination of AWS generative AI services and complementary technologies, focusing on scalability, customization, and security. We finalized the following architecture to serve our technical needs.

The AWS services include:

The development journey

We embarked on the development of HxGN Alix through a structured, phased approach.

The proof of concept

We initiated the project by creating a proof of concept to validate the feasibility of an AI assistant tailored for secure environments. Although the industry has seen various AI assistants, the primary goal of the proof of concept was to confirm that we could build a solution that met our high security standards, which required full control over how the solution is managed.

During the proof of concept, we scoped the project to use an off-the-shelf NeMo model deployed on our existing Amazon Elastic Kubernetes Service (Amazon EKS) cluster, without integrating internal knowledge bases. This approach helped us verify that we could integrate the solution with existing products, control costs, provide scalability, and maintain security, minimizing the risk of late-stage discoveries.
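The following sketch illustrates that scoping at the code level: during the proof of concept, application code only needed to reach a model endpoint inside the cluster, with no retrieval or external calls. The service name, path, and payload schema here are hypothetical and assume an OpenAI-compatible serving layer in front of the model.

```python
import requests

# Hypothetical in-cluster endpoint for the self-hosted model; the actual service
# name, path, and payload schema depend on how the model is served on Amazon EKS.
MODEL_ENDPOINT = "http://llm-inference.default.svc.cluster.local:8000/v1/chat/completions"


def ask_model(question: str) -> str:
    """Send a single prompt to the self-hosted model and return its reply."""
    payload = {
        "model": "nemo-poc",  # placeholder model name
        "messages": [{"role": "user", "content": question}],
        "max_tokens": 512,
        "temperature": 0.2,
    }
    response = requests.post(MODEL_ENDPOINT, json=payload, timeout=60)
    response.raise_for_status()
    return response.json()["choices"][0]["message"]["content"]


if __name__ == "__main__":
    print(ask_model("Which preventive maintenance tasks are overdue?"))
```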

After releasing the proof of concept to a small set of internal users, we identified a healthy backlog of work items to complete before going live, including security enhancements, architectural improvements, network topology adjustments, prompt management, and product integration.

Security enhancements

To adhere to the stringent security requirements of our customers, we built on the secure infrastructure provided by AWS. With models deployed in our existing production Amazon EKS environment, we could rely on existing tooling for security and monitoring. Additionally, we used isolated private subnets so that code interacting with the models wasn’t connected to the internet, further enhancing information protection for users.
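As a concrete illustration of the isolated-subnet requirement, the following sketch uses boto3 to verify that a subnet’s route table has no route to an internet or NAT gateway. The subnet IDs are placeholders, and subnets that fall back to the VPC’s main route table would need an additional lookup; this is an illustrative check, not the verification process we run in production.

```python
import boto3

ec2 = boto3.client("ec2")


def subnet_is_isolated(subnet_id: str) -> bool:
    """Return True if none of the subnet's routes point at an internet or NAT gateway."""
    route_tables = ec2.describe_route_tables(
        Filters=[{"Name": "association.subnet-id", "Values": [subnet_id]}]
    )["RouteTables"]
    for table in route_tables:
        for route in table.get("Routes", []):
            if route.get("GatewayId", "").startswith("igw-") or route.get("NatGatewayId"):
                return False
    # Note: a subnet with no explicit route table association uses the VPC's
    # main route table, which this filter does not return.
    return True


# Placeholder subnet IDs for the subnets that host model-serving workloads.
for subnet in ["subnet-0123456789abcdef0", "subnet-0fedcba9876543210"]:
    status = "isolated" if subnet_is_isolated(subnet) else "has an internet route"
    print(f"{subnet}: {status}")
```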

Because user interactions are in free-text format and users might input content including personally identifiable information (PII), it was critical not to store any user interactions in any format. This approach provided complete confidentiality of AI use, adhering to strict data privacy standards.
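A minimal sketch of this no-persistence approach, assuming a simple request handler: the conversation text stays in memory only for the duration of the call, and just non-sensitive operational metrics are retained for monitoring. The function and metric names are illustrative, not the actual implementation.

```python
import time
from dataclasses import dataclass
from typing import Callable, Tuple


@dataclass
class ChatMetrics:
    """Non-sensitive operational data only; the message text is never stored."""
    latency_ms: float
    prompt_chars: int
    response_chars: int


def handle_chat_turn(user_message: str, generate: Callable[[str], str]) -> Tuple[str, ChatMetrics]:
    """Process one chat turn entirely in memory.

    `generate` is the model call (for example, the ask_model sketch above).
    The user's message and the model's reply are returned to the caller and
    then discarded; only aggregate metrics are kept.
    """
    start = time.perf_counter()
    reply = generate(user_message)
    metrics = ChatMetrics(
        latency_ms=(time.perf_counter() - start) * 1000,
        prompt_chars=len(user_message),
        response_chars=len(reply),
    )
    # Deliberately no logging or persistence of user_message or reply.
    return reply, metrics
```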

Adjusting response accuracy

During the proof of concept, it became clear that integrating the digital worker with our products was essential. Base models had limited knowledge of our products and often produced hallucinations. We had to choose between pretraining the model on internal documentation or implementing Retrieval Augmented Generation (RAG). RAG became the obvious choice for the following reasons:

Implementing a RAG system presented its own challenges and required experimentation. Key challenges are depicted in the following figure.

These challenges include:

We tested two approaches:

The first method worked better with large document sets by focusing on highly relevant results, whereas the second approach was more effective with a smaller, focused document set. Both methods have their pros and cons, and results vary based on the nature of the documents.

To address these challenges, we developed a multi-step pipeline to produce accurate responses from our digital assistant.
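To make the shape of such a pipeline concrete, here is a minimal sketch of a retrieve, rerank, and generate flow. The retriever, reranker, and generate callables stand in for the actual components, which this post does not detail, so treat this as an illustration rather than our production pipeline.

```python
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Document:
    source: str
    text: str
    score: float = 0.0


def answer_with_rag(
    question: str,
    retriever: Callable[..., List[Document]],
    reranker: Callable[[str, str], float],
    generate: Callable[[str], str],
    top_k: int = 20,
    keep: int = 4,
) -> str:
    """Retrieve candidate passages, keep the most relevant, and ground the answer in them."""
    # 1. Retrieve a broad candidate set from the product documentation index.
    candidates = retriever(question, top_k=top_k)

    # 2. Re-score candidates against the question and keep the most relevant.
    for doc in candidates:
        doc.score = reranker(question, doc.text)
    context = sorted(candidates, key=lambda d: d.score, reverse=True)[:keep]

    # 3. Ask the model to answer using only the selected passages.
    context_block = "\n\n".join(f"[{d.source}]\n{d.text}" for d in context)
    prompt = (
        "Answer the question using only the context below. "
        "If the context is insufficient, say so.\n\n"
        f"Context:\n{context_block}\n\nQuestion: {question}"
    )
    return generate(prompt)
```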

The following figure summarizes our RAG implementation journey.

Adjusting the application development lifecycle

For generative AI systems, the traditional application development lifecycle requires adjustments. New processes are necessary to manage accuracy and system performance:
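One such process is an accuracy regression suite of golden questions that runs whenever a prompt, model, or retrieval change ships. The sketch below is purely illustrative, with placeholder questions and a simple keyword check standing in for whatever evaluation method a team actually adopts.

```python
# Illustrative golden questions; a real suite would cover product-specific scenarios.
GOLDEN_SET = [
    {"question": "How do I create a work order?",
     "must_mention": ["work order", "create"]},
    {"question": "What does the asset hierarchy show?",
     "must_mention": ["asset", "hierarchy"]},
]


def evaluate(answer, threshold: float = 0.9) -> bool:
    """Return True if the share of passing golden questions meets the threshold.

    `answer` is the assistant entry point (for example, answer_with_rag with its
    dependencies bound); the keyword check is a stand-in for richer scoring.
    """
    passed = 0
    for case in GOLDEN_SET:
        reply = answer(case["question"]).lower()
        if all(phrase in reply for phrase in case["must_mention"]):
            passed += 1
    pass_rate = passed / len(GOLDEN_SET)
    print(f"Pass rate: {pass_rate:.0%}")
    return pass_rate >= threshold
```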

Conclusion

The successful launch of HxGN Alix demonstrates the transformative potential of generative AI in enterprise asset management. By using AWS generative AI services and a carefully selected technology stack, we optimized internal workflows and elevated user satisfaction within secure environments. HxGN Alix exemplifies how a strategically designed AI solution can drive efficiency, enhance user experience, and meet the unique security needs of enterprise clients.

Our journey underscores the importance of a strategic approach to generative AI—balancing security, accuracy, and sustainability—while focusing on the right use case and technology stack. The success of HxGN Alix serves as a model for organizations seeking to use AI to solve complex information access challenges.

By using the right technology stack and strategic approach, you can unlock new efficiencies, improve user experience, and drive business success. Connect with AWS to learn more about how AI-driven solutions can transform your operations.


About the Authors

Julio P. Roque is an accomplished Cloud and Digital Transformation Executive and an expert at using technology to maximize shareholder value. He is a strategic leader who drives collaboration, alignment, and cohesiveness across teams and organizations worldwide. He is multilingual, with an expert command of English and Spanish, an understanding of Portuguese, and cultural fluency in Japanese.

Manu Mishra is a Senior Solutions Architect at AWS, specializing in artificial intelligence, data and analytics, and security. His expertise spans strategic oversight and hands-on technical leadership, where he reviews and guides the work of both internal and external customers. Manu collaborates with AWS customers to shape technical strategies that drive impactful business outcomes, providing alignment between technology and organizational goals.

Veda Raman is a Senior Specialist Solutions Architect for generative AI and machine learning at AWS. Veda works with customers to help them architect efficient, secure, and scalable machine learning applications. Veda specializes in generative AI services like Amazon Bedrock and Amazon SageMaker.