
Stanford CS25: V3 I Beyond LLMs: Agents, Emergent Abilities, Intermediate-Guided Reasoning, BabyLM

So today we're going to give an instructor-led lecture covering some key topics in transformers and LLMs these days. In particular, Div will be talking about agents, and I'll be discussing emergent abilities, intermediate-guided reasoning, and BabyLM. Let me actually start with my part, because Div is not here yet.

I'm sure many of you have read the paper "Emergent Abilities of Large Language Models" from 2022, so I'll briefly go through it. Basically, an ability is emergent if it is present in larger but not smaller models, and it would not have been directly predicted by extrapolating performance from smaller models. You can think of performance as being near random until a certain critical threshold, after which it improves very sharply; this is known as a phase transition. Again, it would not have been extrapolated or predicted if you were to extend the performance curve of smaller models; it's more of a jump, which we'll see later.

Here's an example with few-shot prompting on many different tasks: modular arithmetic, unscrambling words, various QA tasks, and so forth. You'll see that performance jumps very heavily beyond a certain point. I believe the x-axis here is the number of training FLOPs, which basically corresponds to model scale. In many cases, around 10^22 or 10^23 training FLOPs, there's a massive jump in model performance on these tasks that was not present at smaller scales, so it's quite unpredictable.

Here are some examples of this occurring with augmented prompting strategies. I'll talk a bit later about chain of thought, but basically these strategies improve our ability to elicit behavior from models on different tasks. For example, chain-of-thought reasoning is an emergent behavior that appears again around 10^22 to 10^23 training FLOPs: without it, model performance on GSM8K, a mathematics benchmark, doesn't really improve much, but chain of thought leads to that emergent behavior, a sudden increase in performance. And here's the table from the paper with a bigger list of emergent abilities of LLMs and the scales at which they occur; I recommend checking out the paper to learn more.

One thing researchers have been wondering is why exactly this emergence occurs, and even now there are few explanations for it. The authors also found that the evaluation metrics used to measure these abilities may not fully explain why they emerge, and they suggest some alternative evaluation metrics, which I encourage you to read more about in the paper.

So other than scaling up to encourage these emergent abilities, which could endow even larger LMs with further new abilities, what else can be done? Things like investigating new architectures, higher-quality data (which is very important for performance on all tasks), and improved training procedures could enable emergent abilities to occur, especially in smaller models, which is a growing area of research I'll also talk about more later. Other directions include improving the few-shot prompting abilities of LMs; theoretical and interpretability research, again to try to understand why emergent abilities exist and how we might leverage them further; and perhaps some computational linguistics work.

With these large models and emergent abilities there are also risks: potential societal risks around truthfulness, bias, and toxicity. Emergent abilities incentivize further scaling of language models, for example up to GPT-4 size or beyond, but this may lead to increased bias and toxicity, as well as memorization of training data, which larger models are more prone to. And there are potential risks in future language models that have not yet been discovered, so it's important that we approach this in a safe manner as well.

Emergent abilities and larger models have also led to sociological changes in the community's views and use of these models. Most importantly, they've led to the development of general-purpose models which perform well on a wide range of tasks, not just the particular tasks they were trained for. When you think of GPT-3.5 and GPT-4, these are general-purpose models that work well across the board and can then be adapted to different use cases, mainly through in-context prompting and so forth. This has also led to new applications of language models outside of NLP: for example, they're now being used a lot for text-to-image generation, where the encoder part of those text-to-image models is basically a transformer or large language model, as well as for things like robotics. You'll recall that earlier this quarter Jim Fan gave a talk about using GPT-4 in Minecraft and for robotics work, including long-horizon tasks.
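One way to see how the choice of evaluation metric can shape apparent emergence, a point the paper's authors raise, is a toy comparison. The numbers and outputs below are entirely made up for illustration: an all-or-nothing metric like exact match shows a sudden jump, while a partial-credit metric over the very same outputs improves smoothly.

```python
# Illustration (with invented outputs) of how a strict all-or-nothing metric
# can make smoothly improving capability look like a sudden "emergent" jump.

def exact_match(pred: str, target: str) -> float:
    """All-or-nothing: 1.0 only if the full answer string matches."""
    return 1.0 if pred == target else 0.0

def per_token_credit(pred: str, target: str) -> float:
    """Partial credit: fraction of answer tokens the model got right."""
    p, t = pred.split(), target.split()
    hits = sum(a == b for a, b in zip(p, t))
    return hits / max(len(t), 1)

target = "the answer is 42"
# Pretend outputs from four model scales: each larger model gets one more token right.
outputs = ["a b c d", "the b c d", "the answer c d", "the answer is 42"]

smooth = [per_token_credit(o, target) for o in outputs]   # 0.0, 0.25, 0.5, 1.0
jumpy  = [exact_match(o, target) for o in outputs]        # 0.0, 0.0, 0.0, 1.0
```

Under the partial-credit metric the improvement is linear; under exact match, nothing appears to happen until the largest model, which is exactly the phase-transition shape in the plots.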

So basically, in general, this has led to a shift in the NLP community toward general-purpose rather than task-specific models. As I stated earlier, some directions for future work include further model scaling, although I believe we will soon reach a limit or a point of diminishing returns with model scale alone; improved model architectures and training methods; and data scaling. I also believe data quality is of high importance, possibly even more important than model scale and the model itself. Other directions are better techniques for, and understanding of, prompting, as well as exploring and enabling performance on frontier tasks that current models cannot perform well on. GPT-4 pushed the limit here; it performs well on many more tasks, but studies have shown it still struggles with even some more basic reasoning, including analogical and commonsense reasoning.

I had some questions here, and I'm not sure how much time we have to address them. For the first one: as I said, I think emergent abilities will keep arising up to a certain point, but there will be a limit or a point of diminishing returns as model and data scale rise, because at some point there will be overfitting, and there's only so much you can learn from all the data on the web. I believe more creative approaches will be necessary after a certain point, which also addresses the second question. So I will move on; if anybody has questions, feel free to interrupt at any time.

The second thing I'll be talking about is something I call intermediate-guided reasoning.

I don't think that's actually an established term; it's typically called chain-of-thought reasoning, but it's not just chains being used now, so I wanted to give it a broader title: intermediate-guided reasoning. This was inspired by work by my friend Jason, who was at Google and is now at OpenAI, called chain-of-thought (CoT) reasoning. This is basically a series of intermediate reasoning steps, which has been shown to improve LLM performance, especially on more complex reasoning tasks. It's inspired by the human thought process of decomposing problems into multi-step problems. For example, when you're solving math questions on an exam, you don't just jump to the final answer; you write out your steps. Even when you're just thinking through things, you break them down in a piecewise, step-by-step fashion, which typically lets you arrive at a more accurate final answer, and arrive at it more easily in the first place. Another advantage is that this provides an interpretable window into the behavior of the model: you can see exactly how it arrived at an answer, and if it did so incorrectly, where in its reasoning path it goes wrong or starts down an incorrect path. And it basically exploits the fact that deep down in the model's weights, it knows more about the problem than simple prompting reveals.

Here's an example. On the left is standard prompting: you ask the model a math question and it simply gives you an answer. On the right, you actually break it down step by step; you get the model to show its steps in solving the mathematical word problem, and you'll see that it actually gets the right answer, unlike standard prompting.

There are many ways we can potentially improve chain-of-thought reasoning. It's also an emergent behavior that yields performance gains for larger language models, but even in larger models there's still a non-negligible fraction of errors. These come from calculator errors, symbol-mapping errors, and one-missing-step errors, as well as bigger errors due to larger semantic understanding issues and generally incoherent chains of thought, and we can investigate methods to address these. As I said, chain of thought mainly works for huge models of approximately 100 billion parameters or more. There are three potential reasons it doesn't work well for smaller models: smaller models are fundamentally more limited and fail at even relatively easy symbol-mapping tasks; they are inherently less able to do arithmetic; and they often have logical loopholes and never arrive at a final answer, going on and on in an infinite loop of logic that never converges anywhere. If we could improve chain of thought for smaller models, that could provide significant value to the research community.

Another direction is to generalize it. Right now chain of thought has a fairly rigid definition and format: very step by step, very concrete and defined. As a result, its advantages apply to particular domains and types of questions. The task usually must be challenging and require multi-step reasoning, and it typically works better for things like arithmetic and not so much for response generation, QA, and so forth. Furthermore, it works better for problems or tasks with a relatively flat scaling curve. When you think of humans, we think through different types of problems in multiple different ways; the "scratch pad" we use to think about and arrive at a final answer is more flexible and open to different reasoning structures than such a rigid step-by-step format. So we can potentially generalize chain of thought to be more flexible and work for more types of problems.

Now I'll briefly discuss some alternatives or extensions to chain of thought. One is called Tree of Thoughts. This is more like a tree which considers multiple different reasoning paths; it also has the ability to look ahead and backtrack, then explore other branches of the tree as necessary. This leads to more flexibility, and it's been shown to improve performance on various tasks, including arithmetic. There's also work by my friend called Socratic Questioning, a divide-and-conquer-style algorithm simulating the recursive thinking process of humans. It uses a large language model to propose sub-problems given a more complicated original problem, and just like Tree of Thoughts it has recursive backtracking. The purpose is to answer all the sub-problems and work upward to a final answer to the original problem.

There's also a line of work which uses code and programs to help arrive at a final answer, for example Program-Aided Language Models (PAL). PAL generates intermediate reasoning steps in the form of code, which is then offloaded to a runtime such as a Python interpreter. The point is to decompose the natural-language problem into runnable steps, so the amount of work for the large language model is lower: its job is simply to learn how to decompose the natural-language problem into those runnable steps, and the steps themselves are then fed to, for example, a Python interpreter to solve. Program of Thoughts (PoT) is very similar, breaking the problem into step-by-step code instead of natural language, which is then executed by an actual code interpreter. Again, this works well for tasks like arithmetic, as you can see in the examples from both of these papers, but just as I said earlier, these approaches also do not work very well for things like response generation, open-ended question answering, and so forth.

There's other work, for example Faith and Fate, which breaks down problems into sub-steps in the form of computation graphs, and they show this also works well for things like arithmetic. So you see a trend here: this sort of intermediate-guided reasoning works very well for mathematical and logical problems, but not so much for other things. Again, I encourage you to check out the original papers if you want to learn more; there's a lot of interesting work in this area these days. I'll also be posting these slides; we'll probably put them on the website and Discord.
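The PAL/PoT idea of offloading the computation to an interpreter can be sketched in a few lines. Here `fake_llm_generate` is a hand-written stand-in for the model call; a real system would prompt an LLM to emit the code. The word problem is the toy-count example from the chain-of-thought paper.

```python
# Minimal sketch of the Program-Aided Language Model (PAL) idea: the LLM emits
# Python instead of natural-language reasoning steps, and the Python runtime
# does the arithmetic. The "generated" code below is hand-written for
# illustration; a real system would obtain it from a model call.

def fake_llm_generate(question: str) -> str:
    # Stand-in for an LLM prompted to answer with executable reasoning steps.
    return (
        "toys_initial = 5\n"
        "toys_per_parent = 2\n"
        "answer = toys_initial + 2 * toys_per_parent\n"
    )

def pal_solve(question: str) -> int:
    code = fake_llm_generate(question)
    scope: dict = {}
    exec(code, {}, scope)      # offload the actual computation to the interpreter
    return scope["answer"]

result = pal_solve("Shawn has 5 toys. He got 2 toys each from his mom and dad. "
                   "How many toys does he have now?")
# result == 9
```

The design point is that the model never has to perform arithmetic itself, which is exactly where the "calculator errors" mentioned above come from; it only has to translate the problem into runnable steps.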

But I'll also send them through email later. Very lastly, I want to touch on the BabyLM Challenge, for "baby language models." As I said earlier, I think at some point scale will reach a point of diminishing returns, and further scale comes with many challenges: it takes a long time and costs a lot of money to train these big models, and they can't really be used by individuals who aren't at huge companies with hundreds or thousands of GPUs and millions of dollars. So this challenge, BabyLM, attempts to train language models, particularly smaller ones, on the same amount of linguistic data available to a child. Datasets have grown by orders of magnitude, along with model size; for example, Chinchilla sees approximately 1.4 trillion words during training, which is around 10,000 words for every one word that an average 13-year-old child has heard while growing up. The purpose here is: can we close this gap? Can we train smaller models on less data while hopefully still approaching the performance of these much larger models? Basically, we're trying to focus on optimizing pre-training given data limitations inspired by human development. This would also make research possible for more individuals and labs, potentially on a university budget, since it seems that a lot of research right now is restricted to large companies, which as I said have a lot of resources and money.

So why BabyLM? It can greatly improve the efficiency of training and using language models. It can open up new doors and potential use cases. It can lead to improved interpretability and alignment: smaller models would be easier to control, align, and interpret than incredibly large LLMs, which are basically huge black boxes. It can enhance open-source availability, for example large language models runnable on consumer PCs and by smaller labs and companies. The techniques discovered here may also apply at larger scales. And further, this may lead to a greater understanding of human cognitive models and how exactly we learn language so much more efficiently than these large language models, so there may be a flow of knowledge from cognitive science and psychology to NLP and machine learning, and in the other direction as well.

Briefly, the BabyLM training data that the challenge organizers provide is a developmentally inspired pre-training dataset with under 100 million words: children are exposed to approximately two to seven million words per year, so up to the age of 13 that's approximately 90 million words, which they round up to 100 million.
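That word-budget arithmetic is easy to sanity-check. All figures below are the approximate round numbers quoted in the talk; note the "10,000 to 1" comparison is an order-of-magnitude figure, and with these round numbers the ratio comes out somewhat higher.

```python
# Rough arithmetic behind the BabyLM data budget (all figures are the
# approximate round numbers from the challenge's motivation).
words_per_year = 7_000_000          # upper end of the 2-7M words/year estimate
years = 13
child_budget = words_per_year * years      # ~91M words, rounded up to 100M
chinchilla_words = 1_400_000_000_000       # ~1.4T words seen during training
ratio = chinchilla_words / 100_000_000     # ~14,000x more data than a child
```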
It's mostly transcribed speech, and the motivation there is that most of the input to children is spoken, so the dataset focuses on transcribed speech. It's also mixed-domain, because children are typically exposed to language and speech from a variety of domains. So it has child-directed speech; OpenSubtitles, which are subtitles of movies, TV shows, and so forth; simple children's books, containing stories that children would likely hear as they're growing up; and also some Wikipedia as well as Simple English Wikipedia. And here are just some examples of the child-directed speech, children's stories, Wikipedia, and so on. So that's it for my portion of the presentation, and I'll hand it off to Div.

So, AI agents. Everyone must have seen that there's this new trend where everything is transitioning toward agents; that's the new hot thing. We're seeing people go from language models to now building AI agents. So what's the biggest difference? Why agents? Why not just train a big large language model? I'll go into what the difference is, and also discuss a bunch of things: how you can use agents for taking actions, some emergent architectures, how you can build human-like agents, how you can use them for computer interactions, and how to solve problems of long-term memory and personalization. There are a lot of other things too, like multi-agent communication, and some future directions; I'll try to cover as much as I can.

First, let's talk about why we should even build AI agents. There's a key thesis here, which is that humans will communicate with AI using natural language, and AI will be operating all the machines, allowing for more intuitive and efficient operation. Right now, I as a human am directly using my computer and my phone, but this is really inefficient; we are not optimized by nature to do that, and we're actually really bad at it. But if you can just talk to the AI in language, and the AI is good enough that it can do this at, say, 100x the speed of a human, that's going to happen, and I think that's the future of how things will evolve in the next five years. I call this "Software 3.0"; I have a blog post about this that you can read if you want. The idea is that you can think of a language model as a computing chip, similar to a chip powering a whole system, and then you can build abstractions on top of it.

Cool. So why do we need agents? Usually a single call to a large language model is not enough: you need chaining, you need recursion, you need a lot more, and that's why you want to build systems rather than a single monolith. How do we do this? We use a lot of techniques, especially around multiple calls to a model, and there are a lot of ingredients involved here.
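The chaining point can be made concrete with a tiny sketch: one call produces a plan, then further calls execute each step. Here `call_llm` is a stub with canned responses standing in for a real model API, and the plan text is invented for illustration.

```python
# Sketch of why agents are systems rather than single LLM calls: one call
# plans, and further calls execute each step of the plan. `call_llm` is a
# stub standing in for a real hosted-model API.

def call_llm(prompt: str) -> str:
    # Stub: a real system would call a model here; responses are canned.
    if prompt.startswith("Plan:"):
        return "1. search flights\n2. pick cheapest\n3. book it"
    return "done"

def run_agent(task: str) -> list[str]:
    plan = call_llm(f"Plan: {task}")
    results = []
    for step in plan.splitlines():                     # chain one call per step
        results.append(f"{step} -> {call_llm(f'Execute: {step}')}")
    return results

log = run_agent("book me a flight to NYC")
# three steps, each executed via its own model call
```

Even this toy version already needs control flow, state, and multiple calls, which is the sense in which an agent is a system built around the model rather than a single invocation.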

And I will say building an agent is very similar to building a computer. The LLM is like a CPU: you have the CPU, but now you want to solve problems like, how do I add RAM and memory? How do I take actions? How do I build an interface? How do I get internet access? How do I personalize it to the user? It's almost like you're trying to build a computer, and that's what makes it a really hard problem. Here's an example of a general architecture for agents; this is from Lilian Weng, a researcher. You can imagine an agent has a lot of ingredients: you want memory, which could be short-term or long-term; you have tools, which could be classical tools like a calculator, calendar, code interpreter, etc.; and you want some sort of planning layer, where you can have chains of thought and trees of thoughts, as Steven discussed, and use all of that to actually act on behalf of a user in some environment.

I'll discuss MultiOn a bit just to give a sense of this, though the talk won't be focused on it. This is the agent I'm building, more of a browser agent. The name is inspired by quantum physics, playing on particle names like neutron and muon: MultiOn is like a hypothetical physics particle that's present in multiple places at once. I'll go through some demos to motivate agents; let me just pause this. Here's one thing we did where the agent goes and autonomously books a flight online. This is with zero human intervention: the AI is controlling the browser, issuing clicks and typing actions, to go and book a flight. And here it's personalized to me.

So it knows I like United and basic economy, and it knows some of my preferences. It already has access to my accounts, so it can log into my account, and it actually has purchasing power: it can use the credit card stored in that account and actually book the flight. This motivates what you can do with agents. Now imagine this thing running at 100x speed; that solves so many things, because I don't need websites anymore. Why does United even have a website? I can just ask the agent, just talk to it, and it's done. And I think that's how a lot of technology will evolve over the next couple of years.

Cool. I can also show one more demo. You can do similar things from a mobile phone, where the idea is you have agents present on the phone, and you can chat with them or talk with them using voice; this one is actually multimodal. So you can ask it, "Can you order this for me?", and the agent can remotely go and use your account to actually do this for you instantaneously. Here we're showing what the agent is doing: it can go and act like a virtual human and do the whole interaction. So that's the idea.

And I can show one final demo. Oh, this one is not loading, but we also recently passed the California online driving test. We did an experiment where we had our agent go and take the online driving test in California, with a human sitting there, hands up off the keyboard and mouse, not touching anything. The agent autonomously went to the website, took the quiz, navigated the whole thing, and actually passed. The video is not loading, though.

But it actually worked. Cool. So this motivates why you want to build agents: you can simplify so many things. So many things are just painful, but we don't realize it because we've gotten so used to interacting with technology the way we do right now. If we can reimagine all of this from scratch, that's what agents will allow us to do. I would say an agent can act like a digital extension of a user. Suppose you have an agent that's personalized to you, something like Jarvis from Iron Man: it knows so many things about you, it does things for you, it's a very powerful assistant, and I think that's the direction a lot of things will go in the future. Especially if you build human-like agents, they don't have programmatic barriers: they can do whatever I can do. They can interact with a website as I would, interact with my computer as I would; they don't have to go through APIs and abstractions, which are more restrictive. The action space is also very simple, just clicking and typing. And it's very easy to teach such agents: I can just show the agent how to do something, and it can learn from me and improve over time. That also makes them really powerful and easy to teach, because there's so much data I can generate and use to keep improving them.

There are different levels of autonomy when it comes to agents. This chart is borrowed from autonomous driving, where people have tried to solve this sort of autonomy problem for actual cars. They've spent more than 10 years on it, and success has been, well, they're still working on it. But what the self-driving industry did is give everyone a blueprint for how to build autonomous systems. They came up with a lot of classifications and ways to think about the problem, and the current standard is to think of agents at five different levels. Level 0 is zero automation: you are a human operating the computer yourself. Level 1 is some sort of assistance.

So if you've used something like GitHub Copilot, which auto-completes code for you, that's something like L1: autocomplete. L2 becomes more like partial automation, where it's doing some of the work for you. If anyone has used the new Cursor IDE, I'd call that more like L2: you tell it, "write this code for me." ChatGPT can count as somewhat L2, because you can ask it, "Here's this thing, can you improve it?", and it does some automation on an input. Then there are more levels, and after L3 it gets more exciting. At L3, the agent is actually controlling the computer and doing things, with the human acting as a fallback mechanism. At L4, basically the human doesn't need to be there, but in very critical cases where something very wrong might happen, a human might take over. And L5 basically means zero human presence. I'd say what we're currently seeing is near L2, maybe some L3 systems, in terms of software, and I think we're going to transition to L4 and L5 level systems over the next years.
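The level scheme above can be written down as a simple lookup table. The descriptions paraphrase the talk; the example tools named at each level (Copilot, Cursor) are the speaker's own examples, not an official classification.

```python
# Driving-style autonomy levels mapped onto software agents, paraphrasing the
# talk's classification. Example tools per level are illustrative.
AUTONOMY_LEVELS = {
    0: "No automation: the human operates the computer entirely by hand",
    1: "Assistance: e.g. autocomplete like GitHub Copilot",
    2: "Partial automation: e.g. 'write this code for me' in a Cursor-style IDE",
    3: "Agent controls the computer; the human acts as a fallback",
    4: "Human only intervenes in rare, critical situations",
    5: "Full autonomy: zero human presence",
}
```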

Cool. Next I'll talk about computer interactions. Suppose you want an agent that can do computer interactions for you. There are two ways to do that. One is APIs: programmatically using APIs and tools to do tasks. The second is more direct interaction, keyboard and mouse control, where the agent does the same thing you would do. Both of these approaches have been explored a lot, and there are a lot of companies working on this. On the API route, ChatGPT plugins and the new Assistants API are in that direction, and there's also the Gorilla work from Berkeley, which explores how to train a model that can use, say, 10,000 tools at once by training it on the APIs. There are pros and cons to both approaches. With an API, the nice thing is it's easy to learn, it's safe, and it's very controllable, as long as you know how to use the API. With direct interaction, I'd say it's more powerful, so it's easy to take actions, but more things can go wrong, and you need to work a lot on making sure everything is safe and comes with guarantees.

Maybe I can also show this. Here's another exploration where you can invoke our agent from a very simple interface. The idea is we created an API that can invoke our agent, which controls the computer. So this can become a sort of universal API: I use this one API, give it an English command, and the agent can automatically understand it and go do anything. Basically, for anything that has no API, I don't need task-specific APIs; I can just have one agent that goes and does everything. So this is some exploration we've done with agents.

Cool. Okay, so that covers computer interactions. I could cover more, but I'll jump to other topics; feel free to ask questions about any of this.

So let's go back to the analogy I discussed earlier. I'd say you can think of a model as a sort of computer, and maybe call it a neural compute unit, similar to a CPU: the brain powering your computer, with all the processing power, doing everything that happens. You can think of the model like the cortex: the main part of the brain doing the processing. But a brain has more layers than just the cortex. How do current models work? You feed in some input tokens and they give you some output tokens, and this is similar to how CPUs work to some extent, where you give them some instructions in and get some instructions out. You can compare this with a classical CPU: the diagram on the right is a very simple 32-bit processor, and it has similar structure, with different encodings for different parts of the instruction. An instruction is basically a binary token, a bunch of zeros and ones, and you feed a bunch of them in and get a bunch of zeros and ones out. The model operates in a very similar way, but the space is now English: instead of zeros and ones, you have English characters.

Then you can create more powerful abstractions on top of this. If the model acts like a CPU, you can build a lot of other things around it: a scratch pad, some memory, some instructions. Then you can do recursive calls: I load something from memory, put it in the instruction, pass it to the transformer, which does the processing for me; we get the processed outputs, store them back in memory, or keep processing. This is very similar to code execution: first line, second, third, fourth, and you just keep repeating.

Okay, so here we can discuss the concept of memory. Extending this analogy, memory for an agent is very similar to having a disk in a computer: you want a disk to make sure everything is long-lived and persistent.
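The load-process-store loop from the analogy can be sketched directly. `llm_step` below is a stub for the model call, and the "program" of instructions is invented; the point is only the shape of the loop: load context from memory, pass it through the model, store the result back, repeat.

```python
# Sketch of the "LLM as CPU" loop: load context from memory, pass it through
# the model (stubbed here), store the output back, and repeat -- like
# executing a program line by line.

def llm_step(instruction: str, context: str) -> str:
    # Stub for the transformer doing the actual processing.
    return f"processed({instruction} | {context})"

def run_program(instructions: list[str]) -> list[str]:
    memory: list[str] = []                       # persistent scratch pad / "disk"
    for instr in instructions:
        context = memory[-1] if memory else ""   # load from memory
        memory.append(llm_step(instr, context))  # process, then store back
    return memory

trace = run_program(["read email", "summarize it", "draft reply"])
```

Each step's output becomes the next step's context, which is the recursive-call pattern described above.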
If you look at something like ChatGPT, it doesn't have any sort of persistent memory, so we need a way to load and store it. There are a lot of mechanisms for this right now; most of them are based on embeddings, where you have an embedding model that creates embeddings of the data you care about, and the model loads the right part of the embeddings and uses that for the operation you want. That's the current mechanism, but there are still a lot of open questions. One is hierarchy: how do I do this at scale? It's still very challenging. Suppose I have a terabyte of data I want to embed and process; most current methods will fail, they're really bad at that. A second issue is temporal coherence: a lot of data is temporal, sequential, with a unit of time, and dealing with that sort of data can be hard. How do I deal with memories that change over time, and load the right part of that memory sequence? Another interesting challenge is structure: a lot of data is actually structured; it could be graphical, it could be tabular. How do we take advantage of that structure when we embed the data? And then there are questions around adaptation: suppose you know how to embed data better, or you have a specialized problem you care about, and you want to adapt how you load and store the data and learn that on the fly. That's also a very interesting topic. I'd say this is actually one of the most interesting areas right now: people are exploring it, but it's still very underexplored.

Okay, having talked about memory, I'd say another key concept for agents is personalization.

Okay, understanding the user. I like to think of this as a problem called user-agent alignment, and the idea is: suppose I have an agent that has purchasing power, has access to my accounts, access to my data, and I ask it to go book a flight. It's possible it doesn't know what flights I like and goes and books a thousand-dollar wrong flight for me, which is really bad. So how do I align the agent to know what I like and what I don't like? That's going to be very important, because you need to trust the agent, and trust comes from: okay, it knows you, it knows what is safe, it knows what is unsafe. Solving that problem, I think, is going to be the next challenge if you want to put agents into the wild. And this is a very interesting problem where you can do a lot of things, like RLHF for example, which people have already been exploring for training models, but now you want to do that sort of RL for training agents. There are a lot of different things you can do, and I'd say there are two good categories of learning here. One is explicit learning, where the user can just tell the agent "this is what I like, this is what I don't like," and the agent can ask the user questions explicitly.

For example, maybe it shows me five flight options and asks which one I like, and if I say, "oh, I like United," maybe it remembers that over time, and the next time it says, "I know you like United, so I'm going to choose United." So that's me explicitly teaching the agent, and it's learning my preferences. The second category is more implicit: the agent is just passively watching me, understanding me. If I'm navigating a website, maybe it sees that I click on this pair of shoes, that this is my size, stuff like that, and just from watching, from passively being there, it can learn a lot of my preferences. This becomes more of a passive teaching: because it's acting as a passive observer and looking at all the choices I make, it's able to learn from those choices and build a better understanding of me. And there are a lot of challenges here; I'd say this is actually one of the biggest challenges in agents right now. One is: how do you collect user data at scale, how do you collect user preferences at scale? You might have to actively ask for that, you might have to do passive learning, and you also have to rely on feedback, which could be a thumbs up or thumbs down, or language feedback like "oh no, I don't like this," which you could use to improve. There are also a lot of challenges around on-the-fly adaptation: can you just teach an agent on the fly? If I say "I like this, I don't like that," is it possible for the agent to improve automatically, without a model-training step in between?
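The two preference channels described above can be sketched as a single preference store fed by both signals, with explicit statements weighted more heavily than passively observed clicks. The class, weights, and scoring below are illustrative assumptions, not a real agent API.

```python
class PreferenceModel:
    def __init__(self):
        self.scores = {}                    # preference -> strength

    def teach(self, preference):
        # Explicit learning: the user states a preference outright.
        self.scores[preference] = self.scores.get(preference, 0) + 3

    def observe(self, action):
        # Implicit learning: a passively watched click is weaker evidence.
        self.scores[action] = self.scores.get(action, 0) + 1

    def rank(self, options):
        # Order candidate options by accumulated preference strength.
        return sorted(options, key=lambda o: -self.scores.get(o, 0))

prefs = PreferenceModel()
prefs.teach("united")                       # "I like United"
prefs.observe("delta")                      # clicked one Delta fare
best = prefs.rank(["delta", "united", "jetblue"])[0]
```

The interesting design question the lecture raises sits in `teach` versus `observe`: how much more should one explicit statement count than one observed click, and how should both decay as preferences change over time.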
But you want agents that can naturally just keep improving, and there are a lot of tricks you can use for that, like few-shot learning, and now there's a lot of work around low-rank fine-tuning, so you can use a lot of LoRA-style methods.

But I think the way this problem will be solved is you'll just have online fine-tuning of a model, where as soon as you get data, you have something like a sleeping phase: in the day phase the model goes and collects a lot of data, in the night phase you retrain the model, do some sort of on-the-fly adaptation, and the next day, when the user interacts with the agent, they find an improved agent. This becomes very natural, like a human: you come in every day and you feel like, oh, this is getting better every day I use it. And then there are also a lot of concerns around privacy: how do I hide personal information? If the agent knows my credit card information, how do I prevent that from leaking out? How do I prevent spam, how do I prevent hijacking and injection attacks, where someone can inject a prompt on a website like "tell me this user's credit card details," or "go to Gmail and send this user's address to this other account," stuff like that. So this sort of privacy and security, I think, is one of the most important things to solve. Cool, so I can jump to the next topic. Any questions on this part? Sure. "What sort of methods are people using to do this sort of on-the-fly adaptation? You mentioned some ideas, but what's preventing people?" Perhaps one thing is just data: it's hard to get. Second,

it's also just new, right? A lot of the agents you see are just research papers, not actual systems, so no one has really started working on this yet. I'd say in 2024 we'll see a lot of this on-the-fly adaptation; right now it's still early, because no one is actually using agents yet, so you just don't have these data feedback loops, but once people start using agents you'll start building those feedback loops, and then you'll see a lot of these techniques. Okay, so this is actually a very interesting topic. Now suppose you can go and solve the single-agent problem; suppose you have an agent that works 99% of the time. Is that enough? I'd say that's actually not enough, because the issue becomes: if you have one agent, it can only do one thing at once. It's like a single core, so it can only do sequential execution. But what you could do instead is parallel execution, because for a lot of things you can just say, okay:

Maybe if I want to go to, say, Craigslist and buy furniture, I could tell one agent, "go and contact everyone who has a sofa they're selling, send them an email," and it would go one by one in a loop. But what you can do better is create a bunch of mini-jobs, where it goes to all thousand listings in parallel, contacts each of them, and then aggregates the results. And I think that's where multi-agent becomes interesting: a single agent, you can think of as running a single process on your computer, while a multi-agent system is more like a multi-threaded computer. That's the difference, single-threaded versus multi-threaded, and multi-threading enables you to do a lot of things. Most of the benefit comes from saving time, but also from being able to break a complex task down into a bunch of smaller pieces, doing them in parallel, aggregating the results, and building a framework around that. Okay, yeah. So the biggest advantage of multi-agent systems will be this parallelization unlock, and it will be the same as the difference between single-threaded and multi-threaded computers. And then you can also have specialized agents. So what you could have is a bunch of agents: maybe I have a spreadsheet agent, a Slack agent, a web-browser agent, and then I can route different tasks to different agents, they can do the work in parallel, and I can combine the results. This sort of task specialization is another advantage: instead of having a single agent trying to do everything, you break the task into specialties. And this is similar to how even human organizations work, right, where everyone is an expert in their own domain.
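The fan-out pattern above can be sketched with a thread pool: instead of contacting listings one by one, spawn a worker per listing in parallel and aggregate the replies. `contact_seller` is a hypothetical stand-in for whatever a worker agent would actually do.

```python
from concurrent.futures import ThreadPoolExecutor

def contact_seller(listing: str) -> str:
    # A real worker agent would browse the listing and send a message;
    # here we just return a record of the contact.
    return f"contacted {listing}"

listings = [f"sofa-{i}" for i in range(5)]

# Parallel fan-out: one mini-job per listing, like the multi-agent version
# of the sequential loop described above.
with ThreadPoolExecutor(max_workers=5) as pool:
    results = list(pool.map(contact_seller, listings))

# Aggregate the worker results, playing the role of the manager agent.
summary = ", ".join(results)
```

`pool.map` preserves input order, so aggregation stays deterministic even though the workers run concurrently, which is one reason this single-process analogy maps so cleanly onto multi-agent fan-out.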
And then if there's a problem, you route it to the people who are specialized in that area, and they work together to solve the problem. The biggest challenge in building these multi-agent systems is going to be communication: how do you communicate really well? This might involve requesting information from an agent, or communicating the final response. And I'd say this is actually a problem that even we face as humans; there can be a lot of miscommunication gaps between humans, and a similar thing will become prevalent with agents, too. Okay, and there are a lot of primitives you can think about for this sort of agent-to-agent communication, and you can build a lot of different systems.

And we'll start to see some sort of protocol emerge, a standardized protocol where all the agents use it to communicate, and the protocol will ensure we can reduce the miscommunication gaps and reduce failures. It might have methods to check whether a task was successful or not, do retries, handle security, stuff like that. So we'll see this sort of agent protocol come into existence, which will become the standard for agent-to-agent communication, and it should enable exchanging information between fleets of different agents. You also want to build hierarchies; again, I'd say this is inspired by human organizations. Human organizations are hierarchical because at some point it's more efficient to have a hierarchy than a flat organization: if you have a single manager managing hundreds of people, that doesn't scale, but if each manager manages ten people and you have a lot of layers, that's more scalable. And then you might want a lot of primitives around how to sync between different agents, how to do async versus sync communication, that kind of thing. Okay, and here's one example. Suppose there's a user, and the user talks to a manager agent, and the manager agent is acting as a router. The user can come to it with any request, and the agent sees, "oh, maybe for this request I should use a browser," so it goes to a browser agent, or it says, "oh, I should use Slack for this," so it goes to a different agent. And it can also be responsible for dividing the task; it can say,
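One way to picture the standardized protocol hypothesized above is a message type with an explicit status field, plus a delivery loop that retries on failure. The field names and retry policy here are assumptions for illustration, not an existing protocol.

```python
from dataclasses import dataclass

@dataclass
class AgentMessage:
    sender: str
    recipient: str
    task: str
    status: str = "pending"          # pending | done | failed

def send_with_retries(msg, handler, max_retries=3):
    """Deliver a task; retry so transient failures don't sink the exchange."""
    for _ in range(max_retries):
        msg.status = handler(msg)    # the receiving agent reports a status
        if msg.status == "done":
            return msg
    msg.status = "failed"
    return msg

# A worker that fails once and then succeeds, to exercise the retry path.
calls = {"n": 0}
def flaky_worker(msg):
    calls["n"] += 1
    return "done" if calls["n"] >= 2 else "error"

result = send_with_retries(
    AgentMessage("manager", "worker", "book flight"), flaky_worker)
```

The explicit `status` field is the key idea: both sides agree on what "done" means, which is exactly the miscommunication gap the lecture says a protocol should close.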
"For this task, maybe I can launch ten different sub-agents or sub-workers that go and do this in parallel, and once they're done, I aggregate the responses and return the result to the user." So this becomes a very interesting pattern: an agent that sits in the middle, between all the work that's being done and the actual user, responsible for communicating what's happening to the human. And we'll need to build a lot of robustness. One reason is just that natural language is very ambiguous; even for humans it can be confusing, it's very easy to misunderstand and miscommunicate, and we'll need to build mechanisms to reduce this. I can also show an example here, so let's try to get through this quickly. Suppose you have a task X you want to solve, and the manager agent is responsible for routing the task to the worker agents. You can tell the worker, "okay, do task X, here's the plan, here's the context, the current status for the task is not done." Now suppose the worker goes and does the task and says, "okay, I've done the task," and sends a response back. The response could be a bunch of thoughts, it could be some actions, it could be a status. Then the manager can ask, okay,

"maybe I don't trust the worker, I don't want to assume this is actually correct." So you might want to do some sort of verification, and you can say, "okay, this was the spec for the task; verify that everything has been done correctly to the spec." And if the agent says, "yeah, everything is correct, I've verified everything is good," then you can say, okay, this is good, and the manager can mark the task as actually done. This two-way cycle prevents miscommunication, in the sense that otherwise it's possible something could have gone wrong but we never caught it. And you can see in scenario two what happens when there is a miscommunication: the manager says, "okay, let's verify the task was done," but then we actually find out the task was not done, and then what you can do is try to redo the task. The manager in that case can say, "okay, the task was not done correctly, so that's why we caught this mistake, and now we want to fix it," so it tells the agent, "redo this task, and here's some feedback and corrections to include." Cool, so those are the main parts of the talk. I can also discuss some future directions of where things are going. Cool, any questions so far?
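The verify-then-redo cycle just described can be sketched as a loop: the manager never trusts a bare "done," it checks the result against the spec and sends the task back with feedback when verification fails. All the names and the toy task below are illustrative.

```python
def manager_loop(spec, do_task, verify, max_redos=3):
    """Two-way cycle: attempt, verify against the spec, redo with feedback."""
    feedback = None
    for _ in range(max_redos):
        result = do_task(spec, feedback)   # worker attempts the task
        if verify(spec, result):           # manager verifies against the spec
            return result, "done"
        feedback = "task did not match spec, redo with corrections"
    return result, "failed"

# Toy worker: forgets the attachment on its first try, fixes it on redo.
attempts = {"n": 0}
def worker(spec, feedback):
    attempts["n"] += 1
    return "email sent" if feedback is None else "email sent with attachment"

def check(spec, result):
    return "attachment" in result

out, status = manager_loop("send email with attachment", worker, check)
```

Scenario one from the lecture is the first-iteration exit; scenario two is the redo path, where the caught mistake becomes the feedback for the next attempt.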

Okay, cool. So let's talk about some of the key issues with building these sorts of autonomous agents. One is just reliability: how do you make them really reliable? If I give it a task, I want the task to be done 100% of the time, and that's really hard, because neural networks and AI are stochastic systems, so 100% is not possible; you'll get at least some degree of error, and you can try to reduce that error as much as possible. The second is the looping problem: it's possible the agent might diverge from the task it's been given and start doing something else, and unless it gets some sort of environment feedback or correction, it might just go and do something different from what you intended and never realize it's wrong. The third issue is testing and benchmarking: how do we test these sorts of agents, how do we benchmark them? And finally, how do we deploy them, and how do we observe them once deployed? That's very important, because if something goes wrong, you want to be able to catch it before it becomes a major issue. I'd say the biggest risk for number four is something like spam: suppose you have an agent that can go on the internet and do anything, and you don't observe it; it could just evolve and basically take over the whole internet, possibly, right? So that's why observability is very important, and I'd also say building a kill switch: you want agents that can be killed, in the sense that if something goes wrong, you can just press a button and kill them.
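The observability and kill-switch ideas can be sketched together: every action the agent takes is logged, and a single flag, checked before each step, halts it immediately when something looks wrong. The agent loop and the "suspicious" trigger below are made-up illustrations.

```python
import threading

class KillableAgent:
    def __init__(self):
        self._stop = threading.Event()   # the kill switch
        self.log = []                    # observability: audit every step

    def kill(self):
        self._stop.set()                 # "press the button"

    def run(self, actions):
        for action in actions:
            if self._stop.is_set():      # checked before every single action
                self.log.append("KILLED")
                return "killed"
            self.log.append(action)      # record what the agent is doing
            if action == "suspicious":   # an observer flags bad behavior
                self.kill()
        return "finished"

agent = KillableAgent()
status = agent.run(["browse", "suspicious", "purchase"])
```

Using `threading.Event` means the kill signal can also come from a separate monitoring thread, which is the realistic deployment: the observer and the agent are not the same process.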

Okay, so this goes into the looping problem. You can imagine: suppose I want to do a task, and the ideal trajectory for the task is the white line, but what might happen is the agent takes one step, does something incorrectly, and never realizes it made a mistake. It doesn't know what to do, so it does something more random, then something more random again; it just keeps making mistakes, and eventually it reaches some really bad place and just keeps looping, maybe doing the same thing again and again, and that's bad. The reason this happens is that you don't have feedback: suppose the agent made a mistake and doesn't know it made a mistake; someone has to go and tell it, "you made a mistake, you need to fix this." For that you need some sort of verification agent, or you need some sort of environment. For example, if this is a coding agent, maybe it writes some code and the code doesn't compile; then we can take the error from the compiler or the IDE, give it to the agent, "okay, this was the error," and it takes another step. It tries multiple times, until it can fix all the issues. So you really need this sort of feedback; otherwise the agent never knows it's wrong. And this is one issue we've seen with early systems like AutoGPT. I don't think people even use AutoGPT anymore; it used to be a fad, I think around February, and now it has disappeared, and the reason is just that it's a good concept, but it doesn't do anything useful, because it keeps diverging from the task and you can't actually get it to do anything correct.
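The compiler-feedback loop for a coding agent can be sketched with Python's built-in `compile` as the environment: try the code, and if it doesn't parse, feed the error back and retry instead of looping blindly. `fix_code` is a hypothetical repair step; a real agent would re-prompt the model with the compiler error.

```python
def try_compile(code: str):
    """The 'environment': return None on success, else the compiler error."""
    try:
        compile(code, "<agent>", "exec")   # Python's built-in syntax check
        return None
    except SyntaxError as e:
        return str(e)                      # this error becomes the feedback

def fix_code(code: str, error: str) -> str:
    # Hypothetical repair step: a real agent would re-prompt the model with
    # `error`; here we just close the missing parenthesis.
    return code + ")"

def agent_loop(code, max_tries=3):
    """Retry with environment feedback instead of diverging silently."""
    for _ in range(max_tries):
        error = try_compile(code)
        if error is None:
            return code, "ok"
        code = fix_code(code, error)       # feedback-driven next step
    return code, "failed"

fixed, status = agent_loop("print('hi'")   # broken: unclosed parenthesis
```

Without the `try_compile` signal, this loop is exactly the white-line divergence described above: the agent would keep taking steps with no way to know any of them were wrong.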

Okay. And we can also discuss more of the computer abstraction for agents. This was a recent post from Andrej Karpathy, where he talked about an LLM operating system, and I'd say this is definitely in the right direction: you think of the LLM as the CPU, you have the context window acting like RAM, and then you build other utilities around it. You have the Ethernet, which is the browser; you have other LLMs that you can talk to; you have the file system, which acts as the disk; you have the classical software 1.0 tools, which the LLM can control; and then you can also add multimodality, so you have video inputs, audio inputs, more things over time.

And once you look at this, you start to see the whole picture of where things will go. Currently, most of what we're seeing is just the LLM, and most people are just working on optimizing the LLM, but this is the whole picture of what we want to achieve for it to be a useful system that can actually do things for me. I think what we'll start to see is this becoming an operating system, in a sense, where someone like, say, OpenAI can go and build this whole thing, and then people can plug in programs and build stuff on top of this operating system. Here's an even more generalized concept, which I like to call a neural computer, and it's very similar, but now, if you were to think of this as a fully-fledged computer, what are the different systems you'd need? You can think: I'm a user talking to this sort of fully-fledged AI; imagine the goal is to build Jarvis. What should the architecture of Jarvis look like? I'd say this goes toward the right architecture to some extent. You can think of a user who's talking to, say, a Jarvis. You have a chat interface; the chat is how I'm interacting with it, and it could be responsible for personalization. It can have some history about what I like and what I don't like, so it has layers that capture my preferences; it knows how to communicate; it has human-like skills, maybe something like empathy, so it feels very human-like. And behind the chat interface you have some sort of task engine, which powers the capabilities. So if I ask it, okay,
"do this calculation for me," or "fetch me this information," or "order me a burger," then the chat interface should activate the task engine: "okay, instead of just chatting, I need to go and do a task for the user," and that goes to the task engine.

And then you can imagine there are going to be a couple of rules, because you want to have safety in mind and make sure things don't go wrong, so any sort of engine you build needs some rules. This could be like the three laws of robotics, that a robot should not harm a human, stuff like that. Imagine you want this system to have a bunch of inherent rules, principles it can never violate, and if it creates a task or a plan that violates these rules, that plan should be invalidated automatically. So what the task engine does is take the chat input and say, "I want to spawn a task that can actually solve this problem for the user," and the task would be, say, in this case, something like "I want to go online and book a flight." That task can go to a routing agent, which is the manager-agent idea again, and the manager agent can decide: okay, what should I do? Should I use the browser? Should I use some sort of local app? Should I use some file storage or retrieval system? And based on that decision, it's possible you might need a combination of things: maybe I need to use the file system to find some information about the user, I need to do some real-time lookup, and I also need to use some apps and tools. So you can do this sort of message passing to all the agents and get results back from the agents. Say the browser agent says, "okay, I found these flights, this is what the user likes," and maybe you have some sort of planning engine which can say, "okay, these are all the valid plans that make sense, if you want nonstop flights, for instance."
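The inherent-rules idea can be sketched as a gate in front of execution: a plan is checked against a set of hard rules, and any plan that violates one is invalidated automatically. The rules here are illustrative placeholders, not a real safety policy.

```python
# Inherent rules the task engine can never violate; each rule returns True
# when the plan is acceptable. Real rules would be far richer than
# substring checks -- this only shows the invalidation mechanism.
RULES = [
    lambda plan: "delete all files" not in plan,    # never destroy data
    lambda plan: "share credit card" not in plan,   # never leak payment info
]

def validate_plan(plan: str) -> bool:
    """A plan is valid only if every inherent rule passes."""
    return all(rule(plan) for rule in RULES)

ok = validate_plan("open browser and search nonstop flights")
bad = validate_plan("share credit card number with the seller")
```

The design point is that the check runs on the *plan*, before any action executes, so a rule violation costs nothing; catching it after execution would be too late for irreversible actions.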
And then you take that result and show it to the user: "okay, I found all these for you," and the user says, "choose this flight," and then you actually go and book it. This sort of gives you an idea of what the hierarchy of the system should look like, and we need to build all these components, whereas currently we only have the LLM.

Okay, cool. And then we can also have reflection, where the idea is: once you do a task, it's possible something might be wrong, so the task engine can verify, using its rules and logic, whether the result is correct or not. If it's not correct, you keep issuing instructions; if it is correct, you pass it to the user. And then you can have more complex things: inner thoughts, plans, ways to keep improving the system. Okay, and I'd say the biggest thing we need right now is error correction, because it's really hard to catch errors, so if you can do that better, I think it will help a lot, especially if you can build agent frameworks that have inherent mechanisms for catching errors and automatically fixing them. Another thing you need is security: you need some sort of model of user permissions, so you might want different layers for what the agent can and cannot do on my computer, for instance. Maybe the agent is not allowed to go to my bank account, but it can go to my Reddit account; you want to build these user permissions. And then you also want to solve problems around sandboxing: how do I make sure everything is safe, that it doesn't go onto my computer and delete everything? How do I deploy it in risky settings, where there might be business risk, there might be financial risk, and make sure that if actions are irreversible, we don't cause a lot of harm?
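The layered user-permissions idea above can be sketched as a default-deny policy checked before every action. The resource names and policy are made up for illustration; the bank/Reddit split mirrors the example in the lecture.

```python
# What the agent is explicitly allowed to touch; anything absent is denied.
PERMISSIONS = {
    "calendar": True,    # agent may manage my calendar
    "reddit": True,      # agent may browse my Reddit account
    "bank": False,       # agent may never touch my bank account
}

def authorize(resource: str) -> bool:
    # Default-deny: a resource not explicitly granted is blocked, which is
    # the safer failure mode when the agent encounters something new.
    return PERMISSIONS.get(resource, False)

allowed = authorize("calendar")
blocked = authorize("bank")
unknown = authorize("email")     # not listed, so denied by default
```

Default-deny is the important design choice here: the dangerous failure mode for an autonomous agent is acting on a resource nobody thought to list, and this layer turns that into a blocked action instead of an unreviewed one.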
