TWIL: July 9, 2023

This week I’m recommending the conversation between Lex Fridman and Jimmy Wales (co-founder of Wikipedia), as well as Satya Nadella’s interview on the Freakonomics Radio podcast. Both are very interesting. Also, a set of articles on the Azure Cosmos DB dedicated gateway, prompt engineering, the new Code Interpreter plug-in for ChatGPT, and CoDi, a new any-to-any generative AI model from Microsoft. Have fun!


Podcasts

Lex Fridman Podcast

Episode 385: Jimmy Wales: Wikipedia
Lex Fridman interviews Jimmy Wales, the co-founder of Wikipedia, about the origin, vision, challenges, and future of the online encyclopedia. They discuss the philosophy of knowledge, the role of community, the impact of language models, and the importance of neutrality and human dignity in biographies. They also share some personal stories, insights, and opinions on various topics related to Wikipedia and beyond.

Freakonomics Radio

Episode 547: Satya Nadella’s Intelligence Is Not Artificial
As C.E.O. of the resurgent Microsoft, Satya Nadella is firmly at the center of the A.I. revolution. We speak with him about the perils and blessings of A.I., Google vs. Bing, the Microsoft succession plan — and why his favorite use of ChatGPT is translating poetry.


Azure Cosmos DB

Azure Cosmos DB dedicated gateway – Overview
A dedicated gateway is server-side compute that is a front-end to your Azure Cosmos DB account. When you connect to the dedicated gateway, it both routes requests and caches data. Like provisioned throughput, the dedicated gateway is billed hourly.
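In practice, pointing an application at the dedicated gateway is mostly a connection-string change: you use the gateway’s own endpoint (the .sqlx.cosmos.azure.com host shown in the portal) instead of the standard one. A minimal sketch with the azure-cosmos Python SDK and a hypothetical account name and key:

from azure.cosmos import CosmosClient

# Hypothetical account; the dedicated gateway exposes its own endpoint
# (note the ".sqlx" host) alongside the account's standard endpoint.
DEDICATED_GATEWAY_ENDPOINT = "https://my-account.sqlx.cosmos.azure.com:443/"
KEY = "<primary-key>"

# Requests sent through this client are routed (and optionally cached)
# by the dedicated gateway rather than the standard gateway.
client = CosmosClient(DEDICATED_GATEWAY_ENDPOINT, credential=KEY)
container = client.get_database_client("mydb").get_container_client("items")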

Azure Cosmos DB integrated cache – Overview
The Azure Cosmos DB integrated cache is an in-memory cache that helps you ensure manageable costs and low latency as your request volume grows. The integrated cache is easy to set up and you don’t need to spend time writing custom code for cache invalidation or managing backend infrastructure. The integrated cache uses the dedicated gateway within your Azure Cosmos DB account.
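Once traffic flows through the dedicated gateway, point reads and queries issued at session or eventual consistency can be served from the cache, and you can cap how stale a cached result may be. A hedged sketch using the Python SDK’s max_integrated_cache_staleness_in_ms option (verify the exact name against your SDK version):

from azure.cosmos import CosmosClient

# Continues the previous sketch: the client must use the dedicated gateway
# endpoint for the integrated cache to apply at all.
client = CosmosClient("https://my-account.sqlx.cosmos.azure.com:443/", credential="<primary-key>")
container = client.get_database_client("mydb").get_container_client("items")

THIRTY_MINUTES_MS = 30 * 60 * 1000

# Point read: served from the integrated cache when a fresh-enough copy exists.
item = container.read_item(
    item="item-id",
    partition_key="item-id",
    max_integrated_cache_staleness_in_ms=THIRTY_MINUTES_MS,
)

# Queries can opt in the same way.
books = list(container.query_items(
    query="SELECT * FROM c WHERE c.category = 'books'",
    partition_key="books",
    max_integrated_cache_staleness_in_ms=THIRTY_MINUTES_MS,
))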

Azure Cosmos DB integrated cache frequently asked questions

The Azure Cosmos DB integrated cache is an in-memory cache that is built into Azure Cosmos DB. This article answers commonly asked questions about the Azure Cosmos DB integrated cache.


Microsoft Fabric

Data Factory June 2023 Monthly Update
Welcome to the first edition of the Microsoft Fabric Data Factory monthly update! In the first week of each month, visit our blog site to see what our teams have been up to. This blog will cover the Data Factory experience in Fabric: Dataflows Gen2 and Data Pipelines. Read on to find all the announcements we have for you this month!


Generative AI

What AI can do with a toolbox… Getting started with Code Interpreter
Everyone is about to get access to the single most useful, interesting mode of AI I have used – ChatGPT with Code Interpreter. I have had the alpha version of this for a couple months (I was given access as a researcher off the waitlist), and I wanted to give you a little bit of guidance as to why I think this is a really big deal, as well as how to start using it.
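For a sense of what that looks like in practice: Code Interpreter runs Python in a sandbox against files you upload, so a request like “explore this CSV and plot monthly totals” effectively becomes something like the snippet below (illustrative only, with a hypothetical sales.csv):

import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical uploaded file; Code Interpreter places uploads under /mnt/data.
df = pd.read_csv("/mnt/data/sales.csv", parse_dates=["date"])

# Aggregate revenue by month and plot it.
monthly = df.set_index("date")["revenue"].resample("M").sum()
monthly.plot(kind="bar", title="Monthly revenue")
plt.tight_layout()
plt.savefig("/mnt/data/monthly_revenue.png")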

Exploring Advanced Techniques in Prompt Engineering: Harnessing the Power of AI Systems
Prompt engineering is an essential component of maximizing the potential of artificial intelligence, particularly in the realm of natural language processing. The field is multidimensional, necessitating a deep understanding of both AI models and their operational contexts. Techniques like few-shot learning, chain-of-thought prompting, and self-reflection, as well as strategies for effectively structuring prompts and selecting model parameters, form the bedrock of successful prompt engineering.
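As a concrete illustration of two of those techniques, the sketch below combines a few-shot prompt (a worked example in the message list) with a chain-of-thought instruction (“think step by step”), using the openai Python package’s chat interface as it existed at the time of writing; the model name and temperature are arbitrary choices:

import openai  # assumes OPENAI_API_KEY is set in the environment

# Few-shot: a worked example teaches the output format.
# Chain-of-thought: the system message asks the model to reason step by step.
messages = [
    {"role": "system",
     "content": "Classify the sentiment of a review as positive or negative. "
                "Think step by step, then end with 'Answer: <label>'."},
    {"role": "user", "content": "Review: 'Great battery life, terrible screen.'"},
    {"role": "assistant",
     "content": "The battery comment is positive but the screen complaint dominates. "
                "Answer: negative"},
    {"role": "user", "content": "Review: 'Setup took a while, but it works flawlessly now.'"},
]

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=messages,
    temperature=0,  # low temperature keeps the reasoning more repeatable
)
print(response["choices"][0]["message"]["content"])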

Prompt Engineering 101
In the fascinating world of large language models, the primary task is to predict text sequences. If you have had prior experience with ChatGPT or similar models, you might already be familiar with the occasional generation of false and absurd responses, a phenomenon that is also known as hallucination. In this blog post, we will delve deeper into this intriguing topic and explore strategies to mitigate and overcome this challenge.
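One widely used mitigation (whether or not it is the exact one the post covers) is grounding: constrain the model to supplied context and give it an explicit way out instead of letting it guess. A minimal, hypothetical prompt template:

# Hypothetical grounding template: the model may only use the supplied
# context and is told to say "I don't know" rather than invent an answer.
GROUNDED_PROMPT = """Answer the question using ONLY the context below.
If the context does not contain the answer, reply exactly: I don't know.

Context:
{context}

Question: {question}
Answer:"""

prompt = GROUNDED_PROMPT.format(
    context="Azure Cosmos DB offers a dedicated gateway with an integrated cache.",
    question="What caching option does the dedicated gateway provide?",
)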

Breaking cross-modal boundaries in multimodal AI: Introducing CoDi, composable diffusion for any-to-any generation
Imagine an AI model that can seamlessly generate high-quality content across text, images, video, and audio, all at once. Such a model would more accurately capture the multimodal nature of the world and human comprehension, seamlessly consolidate information from a wide range of sources, and enable strong immersion in human-AI interactions. This could transform the way humans interact with computers on various tasks, including assistive technology, custom learning tools, ambient computing, and content generation.


Have a great week!