New technology brings with it more career opportunities. You may never have imagined becoming an LLMOps consultant, but there's now a Coursera Specialization which provides preparation for this role.
Like other Coursera Specializations, it is available through Coursera Plus, which currently has a promotion offering $100 off an annual subscription.
The newly emerged field of LLMOps, short for Large Language Model Operations, is concerned with managing the entire lifecycle of large language models (LLMs), including everything from fine-tuning the model for a specific task to deploying it into production and then monitoring its performance over time.
Duke University, which already has Data Science and AI Specializations on Coursera, has added one on Large Language Model Operations which aims to prepare learners for roles such as Machine Learning Engineer, DevOps Engineer, Cloud Architect, AI Infrastructure Specialist, or LLMOps Consultant. Comprising six self-paced courses and expected to take 5 months at 10 hours per week, this program invites you to:
Dive into topics ranging from generative AI techniques to open source LLM management across various platforms such as Azure, AWS, Databricks, local infrastructure, and beyond. Through immersive projects and best practices, gain hands-on experience in designing, deploying, and scaling powerful language models tailored for diverse applications.
The program includes over 20 hands-on coding projects, such as deploying large language models on the Azure and AWS clouds or on services such as Databricks, using the Azure AI Service to build applications, crafting powerful prompts with LLM frameworks, running local LLM models as well as using external APIs and cloud services, and constructing a chatbot over personal data with vector databases. By completing these projects, learners acquire authentic, portfolio-ready experience in deploying, managing, and optimizing large language models.
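For a flavour of what the chatbot-over-personal-data project involves, here is a minimal, illustrative sketch of the retrieval step using an open source vector database (Chroma is used here purely as an example; the course does not specify which tools its projects use):

```python
import chromadb

# In-memory vector store holding a few "personal" documents
client = chromadb.Client()
notes = client.create_collection(name="personal_notes")
notes.add(
    documents=[
        "Dentist appointment on 12 March at 3pm.",
        "Project Phoenix kickoff notes: budget approved, launch in Q3.",
    ],
    ids=["note-1", "note-2"],
)

# Retrieve the most relevant note for a question, then build a prompt around it
question = "When is my dentist appointment?"
results = notes.query(query_texts=[question], n_results=1)
context = results["documents"][0][0]

prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
print(prompt)  # this prompt would then be sent to an LLM of your choice
```

The same retrieve-then-prompt pattern underlies most "chat with your own data" applications, regardless of which vector database or model is used.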
The details of the courses are as follows. All of them can be audited for free, but if you want certificates for completing each course and to count them towards the Specialization, you need to upgrade to the paid-for track, which gives full access to the materials and the graded exercises. With Coursera Plus you can enroll in as many courses as you want.
Introduction to Generative AI - Beginner - 37 hours
Learn what generative AI is and how it has evolved from early AI to the large language models used today. Understand how these models work in applications by learning about model architectures and the training process. Gain an overview of major foundation models and platforms like ChatGPT and Hugging Face, highlighting their capabilities and limitations. Explore the generative AI landscape, comparing options like open source models, local models, and cloud APIs. By the end, you'll have a solid base of knowledge about the foundations of this technology and the options for accessing and leveraging different AI systems.
Operationalizing LLMs on Azure - Beginner/Intermediate - 10 hours
Delve into Azure's AI services and the Azure portal, gaining insights into large language models, their functionalities, and strategies for risk mitigation. Practical applications include leveraging Azure Machine Learning, managing GPU quotas, deploying models, and utilizing the Azure OpenAI Service. As you progress, the course explores nuanced query crafting, Semantic Kernel implementation, and advanced strategies for optimizing interactions with LLMs within the Azure environment. The final module focuses on architectural patterns, deployment strategies, and hands-on application building using RAG, Azure services, and GitHub Actions workflows.
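To illustrate the kind of call the Azure OpenAI material covers, a minimal sketch using the official openai Python package's AzureOpenAI client might look like this (the endpoint, key and deployment name are placeholders, not values from the course):

```python
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com",  # hypothetical Azure OpenAI resource
    api_key="...",           # key from the Azure portal
    api_version="2024-02-01",
)

response = client.chat.completions.create(
    model="my-gpt-deployment",  # the deployment name you created in Azure, not the base model name
    messages=[
        {"role": "system", "content": "You answer questions about internal documentation."},
        {"role": "user", "content": "Summarise the deployment checklist."},
    ],
)
print(response.choices[0].message.content)
```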
Gain practical expertise in scaling data engineering systems using cutting-edge tools and techniques. Throughout the course, you'll master the application of technologies such as Celery with RabbitMQ for scalable data consumption, Apache Airflow for optimized workflow management, and Vector and Graph databases for robust data management at scale.
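To give an idea of what "Celery with RabbitMQ for scalable data consumption" means in practice, here is a minimal, hypothetical task definition (the module name, broker URL and task body are assumptions for illustration, not course code):

```python
# tasks.py
from celery import Celery

# RabbitMQ as the message broker; assumes a local instance with default credentials
app = Celery("ingest", broker="amqp://guest:guest@localhost:5672//")

@app.task
def embed_document(doc_id: str, text: str) -> str:
    # Stand-in for real work, e.g. computing embeddings and writing them to a vector store
    print(f"Processing {doc_id}: {len(text)} characters")
    return doc_id
```

A worker started with celery -A tasks worker then consumes documents queued from anywhere in the pipeline via embed_document.delay(doc_id, text), which is how this style of workload scales out across machines.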
GenAI and LLMs on AWS - Beginner - 45 hours
Discover how to deploy and manage large language models (LLMs) in production using AWS services like Amazon Bedrock. By the end of the course, learners will know how to:
- Choose the right LLM architecture and model for your application using AWS services
- Optimize the cost, performance and scalability of LLMs on AWS using auto-scaling groups, spot instances and container orchestration
- Monitor and log metrics from your LLM to detect issues and continuously improve quality
- Build reliable and secure pipelines to train, deploy and update models using AWS services
- Comply with regulations when deploying LLMs in production through techniques like differential privacy and controlled rollouts
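To make the Bedrock part concrete, a minimal sketch of calling a Bedrock-hosted model with boto3 might look like the following (the region, model ID and prompt are illustrative; the course's own exercises may differ):

```python
import boto3

# Assumes AWS credentials are configured and the chosen model is enabled in your account
bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

response = bedrock.converse(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",  # example model ID
    messages=[{"role": "user", "content": [{"text": "Explain LLMOps in one sentence."}]}],
    inferenceConfig={"maxTokens": 200},
)
print(response["output"]["message"]["content"][0]["text"])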
Databricks to Local LLMs - Beginner - 27 hours
By the end of this course, learners will have mastered Databricks for data engineering and data analytics tasks in data science workflows. They will also learn to run local large language models, such as Mixtral, via Hugging Face Candle and Mozilla llamafile.
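Since llamafile exposes an OpenAI-compatible server (on port 8080 by default), querying a local model can be as simple as pointing the standard openai client at localhost. A small sketch, assuming a llamafile has already been started on your machine:

```python
from openai import OpenAI

# llamafile (and other llama.cpp-based servers) serve an OpenAI-compatible API locally;
# no real API key is needed, but the client requires a non-empty value
client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")

response = client.chat.completions.create(
    model="local-model",  # the server uses whatever model the llamafile bundles
    messages=[{"role": "user", "content": "Give me three uses for a local LLM."}],
)
print(response.choices[0].message.content)
```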
Open Source LLMOps Solutions - Beginner - 38 hours
Learn the fundamentals of large language models (LLMs) and put them into practice by deploying your own solutions based on open source models. By the end of this course, you will be able to leverage state-of-the-art open source LLMs to create AI applications using a code-first approach. The highlight of this course is a guided project where you will fine-tune a model like LLaMA or Mistral on a dataset of your choice. You will use SkyPilot to easily scale model training on low-cost spot instances across cloud providers. Finally, you will containerize your model for efficient deployment using model servers like LoRAX and vLLM, gaining first-hand experience of leveraging open source LLMs to build AI solutions.
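As a taste of the deployment end of this course, here is a minimal vLLM sketch for generating text from an open model (the model name and parameters are illustrative, and assume access to the weights on Hugging Face and a suitable GPU):

```python
from vllm import LLM, SamplingParams

# Load an open model and generate a completion; in production the same engine
# is more commonly run as an OpenAI-compatible server
llm = LLM(model="mistralai/Mistral-7B-Instruct-v0.2")
params = SamplingParams(temperature=0.7, max_tokens=128)

outputs = llm.generate(["Write a haiku about model deployment."], params)
print(outputs[0].outputs[0].text)
```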
The skills gained by completing this Specialization are valuable for a career in data science, AI and machine learning, all at the cutting edge of today's rapidly evolving technology.