TWIL: September 24, 2023

In the past two weeks, I’ve been mostly looking into Azure Machine Learning and Prompt Flow. I’m also highlighting two episodes of Lex Fridman’s podcast, a set of articles on Microsoft Fabric, and the just-announced Microsoft Copilot! Epic stuff. Finally, an academic paper on the Chain-of-Verification method for reducing hallucination in LLMs. Have fun!


Podcasts

Lex Fridman Podcast

Episode 395: Walter Isaacson: Elon Musk, Steve Jobs, Einstein, Da Vinci & Ben Franklin
Lex Fridman interviews Walter Isaacson, the author of biographies of Elon Musk, Steve Jobs, Einstein, Benjamin Franklin, Leonardo da Vinci, and many others. The conversation includes practical advice from Isaacson on how to write, manage time, hire and fire people, work in groups or as an individual, deal with mortality, and love and maintain relationships. He also offers guidance for young people who want to pursue their passions and dreams.

Episode 396: James Sexton: Divorce Lawyer on Marriage, Relationships, Sex, Lies & Love
Lex Fridman interviews James Sexton, a divorce attorney and author, about marriage, sex, love, cheating, and divorce. Sexton shares his perspective on why marriages fail, how to prevent or deal with breakups, the common causes and signs of cheating, how to handle complicated divorce cases, and the benefits and drawbacks of prenups, open marriages, and threesomes.


Prompt Flow

What is Azure Machine Learning prompt flow
Azure Machine Learning prompt flow is a development tool designed to streamline the entire development cycle of AI applications powered by Large Language Models (LLMs). As the momentum for LLM-based AI applications continues to grow across the globe, Azure Machine Learning prompt flow provides a comprehensive solution that simplifies the process of prototyping, experimenting, iterating, and deploying your AI applications.

Connections in Prompt flow
Connections in Prompt flow link flows to remote APIs or data sources. They encapsulate essential information such as endpoints and secrets, ensuring secure and reliable communication. In the Azure Machine Learning workspace, connections can be configured to be shared across the entire workspace or limited to the creator. Secrets associated with connections are securely persisted in the corresponding Azure Key Vault, adhering to robust security and compliance standards.
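
As a concrete illustration, here is a minimal sketch of registering an Azure OpenAI connection with the azure-ai-ml SDK v2; the workspace details, connection name, endpoint, and key are placeholders, and the exact entity names may vary between SDK versions:

```python
from azure.identity import DefaultAzureCredential
from azure.ai.ml import MLClient
from azure.ai.ml.entities import WorkspaceConnection, ApiKeyConfiguration

# Client scoped to the workspace (all identifiers are placeholders).
ml_client = MLClient(
    DefaultAzureCredential(),
    subscription_id="<subscription-id>",
    resource_group_name="<resource-group>",
    workspace_name="<workspace>",
)

# The API key is persisted as a secret in the workspace's Key Vault,
# so flows reference the connection by name instead of embedding secrets.
connection = WorkspaceConnection(
    name="my_azure_open_ai_connection",
    type="azure_open_ai",
    target="https://<resource-name>.openai.azure.com/",
    credentials=ApiKeyConfiguration(key="<api-key>"),
)
ml_client.connections.create_or_update(connection)
```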

Runtimes in Prompt flow
In Prompt flow, runtimes serve as the compute resources that execute your flows. A runtime is equipped with a prebuilt Docker image that includes the built-in tools, ensuring that everything needed for execution is readily available.

Flows in Prompt flow
In Azure Machine Learning prompt flow, users can build an LLM-based AI application by developing, testing, tuning, and deploying a flow. This workflow lets users harness the power of Large Language Models (LLMs) and create sophisticated AI applications with ease.
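
As a rough sketch of the develop-test loop, the open-source promptflow SDK can run a flow locally before it is deployed; the flow folder ./my_chat_flow and its question input are hypothetical:

```python
from promptflow import PFClient

pf = PFClient()

# Test: execute the flow DAG once with sample inputs to check it end to end.
result = pf.test(flow="./my_chat_flow", inputs={"question": "What is prompt flow?"})
print(result)

# Batch run: execute the flow against each line of a JSONL dataset,
# the usual precursor to tuning and evaluation.
run = pf.run(flow="./my_chat_flow", data="./test_data.jsonl")
print(run.name)
```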

Tools in Prompt flow
Tools are the fundamental building blocks of a flow in Azure Machine Learning prompt flow. Each tool is a simple, executable unit with a specific function, allowing users to perform various tasks. By combining different tools, users can create a flow that accomplishes a wide range of goals. One of the key benefits of Prompt flow tools is their seamless integration with third-party APIs and open-source Python packages. This not only improves the functionality of large language models but also makes the development process more efficient for developers.
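
For example, a custom Python tool is just a decorated function; this minimal, made-up sketch shows the shape of a unit that a flow node can call:

```python
from promptflow import tool


@tool
def clean_question(question: str) -> str:
    """A trivial custom tool: normalize user input before it reaches an LLM node."""
    return question.strip().rstrip("?") + "?"
```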

Variants in Prompt flow
A variant refers to a specific version of a tool node that has distinct settings. Currently, variants are supported only in the LLM tool. For example, in the LLM tool, a new variant can represent either a different prompt content or different connection settings.
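
To compare variants, the same flow and data can be submitted once per variant; here is a hedged sketch using the promptflow SDK, where the LLM node summarize and its variant names are hypothetical:

```python
from promptflow import PFClient

pf = PFClient()

# One batch run per prompt variant of the LLM node "summarize",
# so the two prompt versions can be evaluated side by side.
for variant in ("${summarize.variant_0}", "${summarize.variant_1}"):
    run = pf.run(flow="./my_flow", data="./test_data.jsonl", variant=variant)
    print(variant, run.name)
```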

Monitoring evaluation metrics descriptions and use cases
Model monitoring tracks model performance in production, from both data science and operational perspectives. To implement monitoring, Azure Machine Learning uses monitoring signals acquired through data analysis on streamed production data. Each monitoring signal has one or more metrics, and you can set thresholds on these metrics to receive alerts via Azure Machine Learning or Azure Monitor about model or data anomalies.
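
For a sense of what this looks like in code, here is a hedged sketch of scheduling an out-of-box monitor with the azure-ai-ml SDK; the endpoint, deployment, and monitor names are placeholders, and these monitoring entities were still in preview at the time of writing:

```python
from azure.identity import DefaultAzureCredential
from azure.ai.ml import MLClient
from azure.ai.ml.entities import (
    MonitorDefinition,
    MonitoringTarget,
    MonitorSchedule,
    RecurrenceTrigger,
)

ml_client = MLClient(
    DefaultAzureCredential(),
    subscription_id="<subscription-id>",
    resource_group_name="<resource-group>",
    workspace_name="<workspace>",
)

# Monitor a deployed online endpoint; with no explicit signals listed,
# Azure ML enables the default signals and their recommended metric
# thresholds, and alerts fire when a threshold is crossed.
target = MonitoringTarget(
    ml_task="classification",
    endpoint_deployment_id="azureml:my-endpoint:my-deployment",
)
definition = MonitorDefinition(monitoring_target=target)

# Analyze the streamed production data once a day.
schedule = MonitorSchedule(
    name="my_model_monitor",
    trigger=RecurrenceTrigger(frequency="day", interval=1),
    create_monitor=definition,
)
ml_client.schedules.begin_create_or_update(schedule)
```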


Azure Machine Learning

Manage budgets, costs, and quota for Azure Machine Learning at organizational scale
When you manage compute costs incurred from Azure Machine Learning at organizational scale, with many workloads, teams, and users, there are numerous management and optimization challenges to work through. This article presents best practices for optimizing costs, managing budgets, and sharing quota with Azure Machine Learning, reflecting the experience and lessons learned from running machine learning teams internally at Microsoft and from partnering with customers.

Plan to manage costs for Azure Machine Learning
This article describes how to plan and manage costs for Azure Machine Learning. First, use the Azure pricing calculator to plan for costs before you add any resources. Next, as you add Azure resources, review the estimated costs. Once you’ve started using Azure Machine Learning resources, use the cost management features to set budgets and monitor costs, and review forecasted costs and spending trends to spot areas where you might want to act.
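
Budgets can also be created programmatically. Below is a rough sketch using the azure-mgmt-consumption package; the scope, amount, dates, and e-mail address are placeholders, and the model names should be checked against the package version you use:

```python
from datetime import datetime, timezone

from azure.identity import DefaultAzureCredential
from azure.mgmt.consumption import ConsumptionManagementClient
from azure.mgmt.consumption.models import Budget, BudgetTimePeriod, Notification

subscription_id = "<subscription-id>"
client = ConsumptionManagementClient(DefaultAzureCredential(), subscription_id)

# A monthly cost budget on the subscription that e-mails the team
# once actual spend crosses 80% of the budgeted amount.
budget = Budget(
    category="Cost",
    amount=1000,
    time_grain="Monthly",
    time_period=BudgetTimePeriod(
        start_date=datetime(2023, 10, 1, tzinfo=timezone.utc),
        end_date=datetime(2024, 9, 30, tzinfo=timezone.utc),
    ),
    notifications={
        "actual-over-80-percent": Notification(
            enabled=True,
            operator="GreaterThan",
            threshold=80,
            contact_emails=["ml-team@example.com"],
        )
    },
)
client.budgets.create_or_update(
    scope=f"/subscriptions/{subscription_id}",
    budget_name="aml-monthly-budget",
    parameters=budget,
)
```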

Announcing managed feature store in Azure Machine Learning
We are excited to announce the public preview of Azure ML managed feature store. Managed feature store empowers machine learning professionals to develop and productionize features independently. You simply provide a feature set specification and let the system handle serving, securing, and monitoring of your features, freeing you from the overhead of setting up and managing the underlying feature engineering pipelines. By integrating with our feature store across the machine learning life cycle, you can accelerate model experimentation, increase the reliability of your models, and reduce your operational costs. This is achieved by redefining the machine learning DevOps experience.
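
To make the feature set specification concrete, here is a hedged sketch of registering one with the azure-ai-ml SDK, based on my reading of the preview docs; the names, entity reference, and spec path are placeholders:

```python
from azure.identity import DefaultAzureCredential
from azure.ai.ml import MLClient
from azure.ai.ml.entities import FeatureSet, FeatureSetSpecification

# Client scoped to the feature store workspace (identifiers are placeholders).
fs_client = MLClient(
    DefaultAzureCredential(),
    subscription_id="<subscription-id>",
    resource_group_name="<resource-group>",
    workspace_name="<feature-store-name>",
)

# Register a feature set from a local specification folder; the managed
# feature store then handles serving, securing, and monitoring the features.
transactions_fset = FeatureSet(
    name="transactions",
    version="1",
    entities=["azureml:account:1"],  # the entity (join key) this set is keyed on
    specification=FeatureSetSpecification(path="./featuresets/transactions/spec"),
    stage="Development",
)
poller = fs_client.feature_sets.begin_create_or_update(transactions_fset)
print(poller.result().name)
```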


Microsoft Fabric

Fabric Capacities – Everything you need to know about what’s new and what’s coming
Fabric is a unified data platform that offers shared experiences, architecture, governance, compliance, and billing. Capacities provide the computing power that drives all of these experiences. They offer a simple and unified way to scale resources to meet customer demand and can be easily increased with a SKU upgrade.


Microsoft Copilot

Announcing Microsoft Copilot, your everyday AI companion

Microsoft Copilot is a new AI companion that unifies the capabilities of Bing, Microsoft 365, and Windows 11. It will provide assistance based on the context and intelligence of the web, your work data, and your activity, and it will be available in Windows 11, Microsoft 365, Edge, and Bing.


Large Language Models

Chain-of-Verification Reduces Hallucination in Large Language Models
Generation of plausible yet incorrect factual information, termed hallucination, is an unsolved issue in large language models. We study the ability of language models to deliberate on the responses they give in order to correct their mistakes. We develop the Chain-of-Verification (CoVe) method whereby the model first (i) drafts an initial response; then (ii) plans verification questions to fact-check its draft; (iii) answers those questions independently so the answers are not biased by other responses; and (iv) generates its final verified response. In experiments, we show that CoVe decreases hallucinations across a variety of tasks, from list-based questions from Wikidata to closed-book MultiSpanQA and long-form text generation.
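
The four steps translate almost directly into code. Here is a minimal sketch of the CoVe loop around a generic complete(prompt) call; the function and the prompt wording are hypothetical stand-ins for your LLM client, not the paper's exact prompts:

```python
def complete(prompt: str) -> str:
    """Placeholder for a call to your LLM provider of choice."""
    raise NotImplementedError("plug in an LLM client here")


def chain_of_verification(question: str) -> str:
    # (i) Draft an initial response.
    draft = complete(f"Answer the question.\nQ: {question}\nA:")

    # (ii) Plan verification questions that fact-check the draft.
    plan = complete(
        "Write one fact-checking question per line for this answer.\n"
        f"Q: {question}\nDraft answer: {draft}"
    )
    verification_questions = [q.strip() for q in plan.splitlines() if q.strip()]

    # (iii) Answer each verification question independently, without showing
    # the draft, so the answers are not biased by the draft's claims.
    verifications = [(q, complete(f"Q: {q}\nA:")) for q in verification_questions]

    # (iv) Generate the final verified response from the collected evidence.
    evidence = "\n".join(f"{q} -> {a}" for q, a in verifications)
    return complete(
        f"Q: {question}\nDraft answer: {draft}\n"
        f"Verification Q&A:\n{evidence}\n"
        "Write a corrected final answer consistent with the verification Q&A:"
    )
```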


Have a great week!