Recently, Microsoft announced new features in Azure AI Services at the Ignite 2023 event. As a Microsoft partner, we are following these updates closely and exploring how the new features will unlock more value for enterprises.
The updates focus on empowering businesses with enterprise-grade generative AI applications – from leveraging cutting-edge foundation models to building AI applications and enhancing user experiences on those applications.
Developing generative AI applications that work exclusively on your enterprise data, for your enterprise, is a complex process. You need powerful GPUs to fine-tune large language models (LLMs). To make this process easier for enterprises, Microsoft launched Model as a Service (MaaS) in the Azure AI model catalog. Using MaaS, you can fine-tune LLMs and build generative AI applications using inference APIs. You are charged for the number of tokens used under the pay-as-you-go (PayGo) model, so you don’t have to worry about the cost and complexity of maintaining GPU infrastructure.
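To make the token-based billing and the inference API concrete, here is a minimal sketch of calling a MaaS-deployed model from Python. The endpoint URL, environment variable names, and the OpenAI-compatible chat completions route are assumptions for illustration; check your deployment’s details in Azure AI Studio for the exact request schema, which varies by model.

```python
# Minimal sketch: call a model deployed through MaaS (pay-as-you-go inference API).
# Assumptions: MAAS_ENDPOINT and MAAS_API_KEY are set for your deployment, and the
# endpoint exposes an OpenAI-compatible /v1/chat/completions route (verify against
# your model's documentation; request schemas vary by model).
import os
import requests

endpoint = os.environ["MAAS_ENDPOINT"]   # serverless endpoint URL from Azure AI Studio
api_key = os.environ["MAAS_API_KEY"]     # key issued for the deployment

payload = {
    "messages": [
        {"role": "system", "content": "You answer questions about our product catalog."},
        {"role": "user", "content": "Summarize last quarter's top-selling categories."},
    ],
    "max_tokens": 256,
    "temperature": 0.2,
}

response = requests.post(
    f"{endpoint}/v1/chat/completions",
    headers={"Authorization": f"Bearer {api_key}", "Content-Type": "application/json"},
    json=payload,
    timeout=60,
)
response.raise_for_status()
data = response.json()

print(data["choices"][0]["message"]["content"])
# PayGo billing is per token, so the usage section is worth logging for cost tracking.
print(data.get("usage"))
```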
New AI capabilities in Azure AI Services
While MaaS helps you access large language models, building generative AI applications can still be daunting. To simplify generative AI application development, Microsoft launched a state-of-the-art platform called Azure AI Studio. The platform uses cutting-edge models and services such as OpenAI’s GPT-4 to enable you to build custom copilots and generative AI solutions.
Generative AI applications help you create original, human-like content and explore your enterprise data. To help you deliver highly accurate search experiences in your generative AI applications, Microsoft launched Vector search and Semantic ranker in Azure AI Search. Vector search helps you swiftly search across extensive enterprise datasets, and Semantic ranker improves the quality and relevance of search results. If you want to have longer conversations with your generative AI application, you can leverage GPT-3.5 Turbo, which supports a 16K-token prompt length. Microsoft also announced the public preview of GPT-4 Turbo, which offers more control and efficiency for your generative AI applications at a lower price.
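As a rough illustration, here is a minimal sketch of a hybrid query that combines Vector search with the Semantic ranker using the azure-search-documents Python SDK (version 11.4 or later). The index name, field names, semantic configuration name, and the placeholder query embedding are assumptions; in practice the embedding would come from the same model used when the index was built.

```python
# Minimal sketch: hybrid query against Azure AI Search combining vector search
# with the semantic ranker. Assumes an index "enterprise-docs" with a text field
# "content", a vector field "contentVector", and a semantic configuration "default".
import os

from azure.core.credentials import AzureKeyCredential
from azure.search.documents import SearchClient
from azure.search.documents.models import VectorizedQuery

client = SearchClient(
    endpoint=os.environ["SEARCH_ENDPOINT"],
    index_name="enterprise-docs",
    credential=AzureKeyCredential(os.environ["SEARCH_API_KEY"]),
)

# Placeholder vector; replace with the output of your embedding model for the query text.
query_embedding = [0.0] * 1536

results = client.search(
    search_text="renewal terms in supplier contracts",  # keyword side of the hybrid query
    vector_queries=[
        VectorizedQuery(vector=query_embedding, k_nearest_neighbors=5, fields="contentVector")
    ],
    query_type="semantic",                  # re-rank the results with the semantic ranker
    semantic_configuration_name="default",
    top=5,
)

for doc in results:
    print(doc["content"][:120])
```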
Typically, generative AI applications take text inputs and generate text outputs. To impart visual understanding capabilities to your generative AI applications, Microsoft launched GPT-4 Turbo with Vision. This model recognizes pictures and the objects in them and generates text responses. So, your generative AI application can now read and understand a handwritten shopping list and add the items to your digital cart. Expanding this capability to video content, you can combine GPT-4 Turbo with Vision and Azure AI Vision to create generative AI applications that understand video content and generate text outputs.
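Taking the shopping-list example, here is a minimal sketch of sending an image to a GPT-4 Turbo with Vision deployment on an Azure OpenAI resource using the openai Python SDK (v1.x). The deployment name, API version, environment variable names, and image URL are placeholder assumptions.

```python
# Minimal sketch: ask a GPT-4 Turbo with Vision deployment to read an image.
# Assumes an Azure OpenAI deployment named "gpt-4v" and a reachable image URL.
import os

from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2023-12-01-preview",  # illustrative; use the version your resource supports
)

response = client.chat.completions.create(
    model="gpt-4v",  # your deployment name (assumed)
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Read this handwritten shopping list and return the items as a bullet list."},
                {"type": "image_url", "image_url": {"url": "https://example.com/shopping-list.jpg"}},
            ],
        }
    ],
    max_tokens=300,
)

print(response.choices[0].message.content)
```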
Together, these features make Azure one of the best platforms to leverage generative AI technology for your business.
14 Questions to ask before embracing the AI capabilities in Azure AI Services
We can say with confidence that now is the right time to embrace or accelerate your Azure cloud strategy. You may want to revamp your existing strategy to take advantage of the latest AI capabilities. As this can be a big leap for your business, you should make a decisive, well-informed move at this point in your cloud journey.
Before redrafting your Azure adoption strategy, you need to carefully consider seven aspects – strategic alignment, operational impact, data integration and utilization, decision-making and model selection, responsible AI implementation, adoption and integration, and scaling AI initiatives. Asking two questions about each aspect of the Azure adoption journey will help you identify the must-haves and nice-to-haves and make the right decisions.
Strategic Alignment
- How can these Azure Machine Learning enhancements align with our company’s strategic goals for the coming year?
- Which specific business areas or processes could benefit most from these AI advancements?
Operational Impact
- How might prompt flow and model catalog streamline our AI application development lifecycle?
- What operational changes or optimizations can we foresee by integrating Model-as-a-Service into our applications?
Data Integration and Utilization
- How can we leverage OneLake integration to enhance collaboration between data engineers and machine learning professionals?
- What opportunities exist to maximize the use of OneLake data assets within Azure Machine Learning for model training and predictions?
Decision-making and Model Selection
- How can the model catalog’s benchmarking capabilities help us make informed decisions about the selection of foundation models for our use cases?
- Which specific models within the catalog align best with our business requirements and goals?
Responsible AI Implementation
- What steps can we take to ensure the responsible and ethical use of AI solutions, especially considering the updates on evaluating and designing responsible AI systems?
- How can we monitor and maintain the safety and quality of AI applications in production?
Adoption and Integration
- What strategies should be implemented to encourage adoption and integration of Azure AI Studio across different departments or teams within the organization?
- How can we maximize the tools and models available in Azure AI Studio to drive innovation within our teams?
Scaling AI Initiatives
- In what ways can we scale proof of concepts developed in Azure AI Studio into full production with continuous monitoring and refinement?
- How can we ensure scalability while maintaining the quality and integrity of AI solutions as we expand their use across the organization?
Answering these questions will help you understand how you can leverage the latest AI capabilities in Azure AI Services and create a concrete roadmap.
Need help in finding answers to these questions?
We’re here to help you.
As a trusted Microsoft partner for more than 16 years, we at Saxon AI help enterprises of all sizes realize successful cloud adoption strategies. Get in touch with us now.