TWIL: October 16, 2023

These past weeks have been very challenging, and I ended up missing my TWILs. This week I'm highlighting two episodes of Lex Fridman's podcast, including the quite impressive interview with Mark Zuckerberg in the metaverse. There is also a set of very interesting articles on Large Language Models, covering the AutoGen framework and LLMOps, as well as news about Microsoft Fabric. Have fun!


Lex Fridman Podcast

Episode 397: Greg Lukianoff: Cancel Culture, Deplatforming, Censorship & Free Speech
Lex Fridman interviews Greg Lukianoff, a free speech advocate, First Amendment attorney, president of FIRE, and co-author of two books on the state of free speech and academic freedom in America. The conversation goes through topics such as cancel culture, freedom of speech, religion, platforming and deplatforming, diversity, equity and inclusion, hate speech, social media, depression, and hope.

Episode 398: Mark Zuckerberg: First Interview in the Metaverse
Lex Fridman interviews Mark Zuckerberg, the CEO of Meta, formerly known as Facebook. They discuss the metaverse, Quest 3, AI, and the future of humanity.

Building the Future AI Portugal Podcast

A Portuguese podcast born out of the annual Building the Future event, where technologies, ideas, and initiatives that transform our world are discussed, with a particular focus on Artificial Intelligence. The episodes of this podcast are in Portuguese.

Quanto tempo falta para os Robots serem mais inteligentes do que os humanos? (in Portuguese)
Today we decided to pick up a topic that has been much talked about and speculated on: how long until robots are more intelligent than humans? Not that we believe anyone can actually answer this question with any degree of precision, but the truth is that this question haunts many people and, at its core, underlies much of the discussion about the dangers of AI and the fear that AI will become dangerous to humans…

Large Language Models

AutoGen: Enabling next-generation large language model applications
AutoGen is a framework for simplifying the orchestration, optimization, and automation of LLM workflows. It offers customizable and conversable agents that leverage the strongest capabilities of the most advanced LLMs, like GPT-4, while addressing their limitations by integrating with humans and tools and having conversations between multiple agents via automated chat.

Microsoft’s AutoGen framework allows multiple AI agents to talk to each other and complete your tasks
Microsoft has joined the race for large language model (LLM) application frameworks with its open source Python library, AutoGen. As described by Microsoft, AutoGen is “a framework for simplifying the orchestration, optimization, and automation of LLM workflows.” The fundamental concept behind AutoGen is the creation of “agents,” which are programming modules powered by LLMs such as GPT-4. These agents interact with each other through natural language messages to accomplish various tasks.
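The agent-to-agent conversation loop described above can be illustrated with a toy sketch. This is plain Python, not the real AutoGen API; the `Agent` class, agent names, and canned replies are all hypothetical, where a real AutoGen agent would call an LLM such as GPT-4 to produce each reply:

```python
# Toy sketch of the multi-agent chat pattern that AutoGen automates.
# Agent names and reply logic are hypothetical stand-ins for LLM calls.

class Agent:
    def __init__(self, name, reply_fn):
        self.name = name
        self.reply_fn = reply_fn  # stands in for an LLM call

    def generate_reply(self, message):
        return self.reply_fn(message)

def run_chat(sender, receiver, message, max_turns=4):
    """Alternate messages between two agents until done or a turn limit."""
    transcript = [(sender.name, message)]
    for _ in range(max_turns):
        reply = receiver.generate_reply(message)
        transcript.append((receiver.name, reply))
        if "TERMINATE" in reply:  # a common convention to end the chat
            break
        sender, receiver = receiver, sender
        message = reply
    return transcript

# Canned behaviour, for illustration only.
assistant = Agent("assistant", lambda m: "Here is a plan. TERMINATE")
user_proxy = Agent("user_proxy", lambda m: "Please solve the task.")

log = run_chat(user_proxy, assistant, "Plot stock prices for 2023.")
for name, text in log:
    print(f"{name}: {text}")
```

In the real framework, the user-proxy agent can additionally execute code produced by the assistant and feed the results back into the conversation.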

An Introduction to LLMOps: Operationalizing and Managing Large Language Models using Azure ML
In recent months, the world of natural language processing (NLP) has witnessed a paradigm shift with the advent of large-scale language models like GPT-4. These models have achieved remarkable performance across a wide variety of NLP tasks, thanks to their ability to capture and understand the intricacies of human language. However, to fully unlock the potential of these pre-trained models, it is essential to streamline their deployment and management for real-world applications. In this blog post, we will explore the process of operationalizing large language models, including prompt engineering and tuning, fine-tuning, and deployment, as well as the benefits and challenges associated with this new paradigm.
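As a deliberately simplified illustration of the prompt-engineering step, prompts can be treated as versioned artifacts and rendered like any other template. The template registry, template texts, and field names below are hypothetical, not taken from the article:

```python
# Minimal sketch of versioned prompt templates, one of the LLMOps
# practices discussed. Template contents are made up for illustration.

PROMPT_TEMPLATES = {
    "summarize-v1": "Summarize the following text in one sentence:\n{text}",
    "summarize-v2": (
        "You are a concise assistant. Summarize the text below in at most "
        "{max_words} words:\n{text}"
    ),
}

def build_prompt(template_id, **fields):
    """Render a registered prompt template; fail loudly on unknown ids."""
    if template_id not in PROMPT_TEMPLATES:
        raise KeyError(f"unknown template: {template_id}")
    return PROMPT_TEMPLATES[template_id].format(**fields)

prompt = build_prompt("summarize-v2", max_words=20,
                      text="LLMOps streamlines ...")
print(prompt)
```

Keeping template versions explicit makes it possible to A/B-test prompt changes and roll back regressions, much as one would with model versions.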

Deploy Your LLM Chatbot With Retrieval Augmented Generation (RAG), llama2-70B (MosaicML inferences) and Vector Search
In this tutorial, we will cover how Databricks is uniquely positioned to help you build your own chatbot using Retrieval Augmented Generation (RAG) and deploy a real-time Q&A bot using Databricks serverless capabilities. We will leverage llama2-70B-Chat to answer our questions, using the MosaicML Inference API.
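The core RAG idea, retrieving the most relevant document and grounding the prompt in it, can be sketched in a few lines. This toy version uses bag-of-words cosine similarity in place of a real embedding model and vector index, and the document texts are made up for illustration:

```python
# Toy retrieval-augmented generation (RAG) sketch: retrieve the most
# similar document, then ground the prompt in it. A real pipeline would
# use an embedding model and a vector search index instead.
import math
from collections import Counter

def embed(text):
    """Crude 'embedding': a word-count vector (stand-in for a real model)."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

documents = [
    "Databricks serverless endpoints serve models in real time.",
    "Llama 2 70B is a chat-tuned large language model.",
]
index = [(doc, embed(doc)) for doc in documents]

def retrieve(question, k=1):
    """Return the k documents most similar to the question."""
    q = embed(question)
    ranked = sorted(index, key=lambda d: cosine(q, d[1]), reverse=True)
    return [doc for doc, _ in ranked[:k]]

question = "What is Llama 2?"
context = retrieve(question)[0]
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
print(prompt)
```

The assembled prompt would then be sent to the chat model (llama2-70B-Chat in the tutorial), which answers from the retrieved context rather than from its parametric memory alone.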

Microsoft Fabric

Medallion architecture: best practices for managing Bronze, Silver and Gold
Many of my clients employ a Medallion structure to logically arrange data in a Lakehouse, processing incoming data through various stages or layers. The most recognized layout incorporates Bronze, Silver, and Gold layers, hence the term "Medallion architecture". Although the three-layered design is common and well known, I have witnessed many discussions on the scope, purpose, and best practices for each of these layers. I also observe a huge difference between theory and practice. So, let me share my personal reflections on how the layering of your data architecture should be implemented.
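A common reading of the three layers is: Bronze keeps raw data as ingested, Silver holds cleaned and conformed records, and Gold holds business-level aggregates. The flow can be sketched in plain Python; in practice these would be Lakehouse tables populated by Spark or SQL jobs, and the sample records below are made up:

```python
# Sketch of a Bronze -> Silver -> Gold flow (illustrative only).

bronze = [  # raw ingested events, kept as-is, duplicates and all
    {"order_id": "1", "amount": "10.5", "country": "PT"},
    {"order_id": "1", "amount": "10.5", "country": "PT"},  # duplicate
    {"order_id": "2", "amount": "n/a", "country": "US"},   # bad value
    {"order_id": "3", "amount": "7.0", "country": "PT"},
]

def to_silver(rows):
    """Silver: deduplicate, enforce types, drop invalid records."""
    seen, out = set(), []
    for r in rows:
        if r["order_id"] in seen:
            continue
        try:
            amount = float(r["amount"])
        except ValueError:
            continue  # quarantine/reject bad records
        seen.add(r["order_id"])
        out.append({"order_id": r["order_id"], "amount": amount,
                    "country": r["country"]})
    return out

def to_gold(rows):
    """Gold: business-level aggregate, e.g. revenue per country."""
    totals = {}
    for r in rows:
        totals[r["country"]] = totals.get(r["country"], 0.0) + r["amount"]
    return totals

silver = to_silver(bronze)
gold = to_gold(silver)
print(gold)
```

The point of the layering is that each stage has a single, auditable responsibility: Bronze preserves history, Silver guarantees quality, and Gold serves consumers.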

Microsoft Fabric release plan documentation
The Microsoft Fabric release plan documentation announces the latest updates and timelines to customers as features are prepared for future releases.

Microsoft Fabric September 2023 update
Welcome to the September 2023 update. We have lots of features this month, including updates to the monitoring hub, the Fabric Metrics app, VS Code integration for Data Engineering, real-time data sharing, and many more. Continue reading for more details on our new features!

Announcing: Column-Level & Row-Level Security for Fabric Warehouse & SQL Endpoint
We are excited to announce the availability of Column-Level and Row-Level Security in Fabric Warehouse and SQL Endpoint, in public preview in all regions! In today's data-driven world, organizations are constantly collecting vast amounts of sensitive information that fuels their operations, decision-making processes, and competitive edge. While data accessibility is essential for business success, ensuring the confidentiality, integrity, and privacy of this information is equally critical. Enter Column-Level and Row-Level Security, two powerful data security strategies that tackle exactly these issues for your organization.

Microsoft OneLake adds shortcut support to Power Platform and Dynamics 365
We are excited to announce that you can now create shortcuts directly to your Dynamics 365 and Power Platform data in Dataverse and analyze it with Microsoft Fabric alongside the rest of your OneLake data. There is no need to export data, build ETL pipelines, or use third-party integration tools. Simply click Link to Microsoft Fabric and you can start working with your data immediately.

Announcing the Data Activator public preview
We are thrilled to announce that Data Activator is now in public preview and is enabled for all existing Microsoft Fabric users. Data Activator is the Fabric experience that lets you drive automatic alerts and actions from your Fabric data. With Data Activator, you can eliminate the need for constant manual monitoring of operational dashboards. Anyone in your organization can use Data Activator because it uses a simple visual interface that requires no technical knowledge.

Responsible AI

Responsible AI Toolbox
Responsible AI is an approach to assessing, developing, and deploying AI systems in a safe, trustworthy, and ethical manner. The Responsible AI Toolbox is a collection of integrated tools and functionalities to help operationalize Responsible AI in practice. With the capabilities of this toolbox, you can assess your models and make user-facing decisions faster and more easily.

Have a wonderful week!