Files
reflector/real_time_transcript_with_timestamp_06-21-2023_153233.txt
2023-06-23 12:16:10 +05:30

1 line
60 KiB
Plaintext

{'text': " that's because I'm looking. I just started the recording so we've got the Python, the WISJACS live real time happening and I can share my screen there and do so we have a letter context but then JDC you can all. Okay, great. That's working. And then, um, they see, you said you can't record the meeting. To start recording. Yeah, I'm going to close my room. I don't have the record about this. we have the real time transcription because Google needs that for the audio plus the transcript plus the timestamps. So cool. Okay, so we can have our discussion as well. I'm planning to begin ask questions or how do you want to go about this? Yeah, so let me make a quick introduction. So what I, it all started with the demo from, uh, uh, Polyantier. So as you know, in the video, so my guess by looking into the videos or I was amazed how much they could achieve just with using like open source simple models. So I still have a lot of experience. start to, it means that they have a definition of entities and actions that they can take. So then they control this to an LLM. So I started exploring with that with Jammo trying to I'm a bit kind of work. So in the NID side, that one good pattern or one good way to achieve this was using GraphQL. So GraphQL is this query language that describes operation mutations and queries. over data and know how to operate. So that means that we shouldn't need to find tune on this. So it's a similar situation of how it works for SQL queries. So people from OpenAI, I am many companies are I mean, that we could map to this, it's Cp because it's some actions that you need to take and you want to out of our human language query, you want to know the parameters of a function to call and which function to. I can take you over and it's amazing that Corey, Meahaw and Chonar at the NIMIKOS. I'm reaching to the point that I would love to have more ideas. So I started by like mocking this kind of entities that could work with ZB. So we have like employees, candidates. I just took a look at out of the comments that we support right now. Reminders. Let me see maybe I'm just sharing One another whole screen. Okay, I think this will solve it So let me go back so you know Cp it's this like common line, we will use it. So there are some entities that are employees. I just like mocked some of them. There is a concept of candidate for the interviews. There is also like reminders. And there are some queries that you can take. So let's say the get-in-play. this is part of the first test that I did. So let me show you here how this can work. So let's say in the first day of the library memory to call how to be. into, sorry, I was in the middle of some, some, eights here. Ten more gods. They always show up. Yeah, yeah. I'm, I'm refructuring like, have a little bit of these, but you see. kong ang to call it. So it works very well for a bunch of cases, you know. So it can map like many queries. So let's say vacations. Thanks. Please. Well, you see, if you... So you see, well, it's very, it's very good for this kind of thing. So it creates the occasion. It feels all the things here. So this can basically be adapted to any like. like, sick cases, such as the one of automating a CLI. So that means adding a common line interface for many things, as long as you can describe the things like this. And this is a very like, like, flexing. So that was the first thing I did, but this has a challenge. What's the challenge that I need to fit this into the LLM? So LLM's have a context window, so you cannot put more text data than it's a program too. the range, let's say word tokens. You know, so if we want to use this for bigger things, we need to come up with an strategy. So that's what I've been doing. So what I did is This I create like a sub tree because this this thing generates us a tree what I mean by this So let's say I know this is a mutation that I need to to to use so there are two things reference here to to to type There is a, I come to employee here and they're not further like references. So, this case is very simple, but there are some others that are there are more complex. So, I've been implementing strategies for this. So, one strategy. So I can use that to create embeddings and then identify or just use that strategy of the top k to know which one should I should include. create the strategy where I directly ask the LLM. So instead of using embeddings, I ask the LLM as if they were tools that is the thing that is now like very popular. So I ask the LLM. This is the query I have. This is the tools that I have to accomplish that with their own. on the screen. I create again this, this like sub tree of the graph QL and then I can obtain the query. So that was a challenge because even that strategy with using the other lem, in the case of of says me, concretely that. It's number of queries is so big. Unutations, that it doesn't fit. So I need to come up with the embedding strategy. So there, you usually, I mean, you can select strategies. So let's say, So this is the query I'm giving for the Rails organization get their Rails repo on the last full request with status open. This is the graphQL that I'm passing. The schema strategy I'm using embeddings. So I'm just creating them bedding. and Did it in our way and stable state right now? Let me try to uncommon some things that I'm I Of course. Yeah, it's very funny to use the LOM strategy here. So did you actually can take a while? and just getting the parts is too big but with the anthropic LLM I can make it it does fit. So, how many tokens is there if I mean like... How many tokens is there? But to be able to fit that to the LLM, you need an strategy to tell the LLM which parts it needs to use. So there are two strategies that I implemented. One is using embeddings. So I grab the with. book and answer this. So I cannot definitely fit a GitHub GraphQL because look how big it is, not even in the in the Vigger ones, in the Vigger and Tropic. So it's 50. So even the 100k window of this LLM. So I just need to pick the entities that are relevant with whatever strategy. Right now there are two like LLMs and embeddings. So I managed to get it that thing queries are they all, you know, do they have rich comments like that? Like most of of big APIs do have that. Okay. And that's a requirement. And because if you got if you got rid of the comments than the LM would not maybe note as well. Yeah, as well. But like I guess there's a trade off rate like maybe you could eliminate the comments and get a better result. because not like foretell that. So I've implemented some like strategies. So for instance, I have one where I can just fit in the like the tools that I have available. And then when I include the graph goes reasons but because it just doesn't fit but yeah all of those combinations are available I can throw a control in a graph QL without comments and it might work if if this variable names are self descriptive as you know as you have seen these one of these things work. So there is this explorer from see. So I ask this get They're rosfick. Just by. And it's just really complex. That's the output that you got from your script right there. And this is the output. These one on the left hand side or the one that is highlighted. that's what I got and then he got the stars and then you can. So that's let me try another one. Yeah, and maybe I could share a use case that where we were talking about, okay, see. building. They're really pushing tons of small meetings as I mean, if I just deliverage these open data standards to collect data for like incompatible appliances, like let's say Mixers on a manufacturing line from different manufacturing. actors to do that with just their knowledge. So creating the open data standards is a chunk of the work. Then improving the gooey interface that they have to then create these graph QL queries is another and I think where We can specific appliances and data elements. How do we take that as that further, basically use language or jdc's describing here to say, hey, I want to get a historian graph for all of the mixers that we had on the line. And that would put that specific data for them, right? That's kind of the idea we're going to explore at least. So let me show you, I already implemented SESME. So let me show you this. So what I did, I went to describing. So they have this query that it's not an easy one. So return a list of times here is sample values for a given instance, that we'll specify time range. So I did, I further described this. because it's within the start time blah blah blah and time. Take a look at this query and it's exactly just a couple of more things just the first. But it's basically got it right. So get raw data with. I don't know if necessarily the users of this would be very clear about the information they need and have an awareness of that. I understand most engineers in manufacturing spaces are familiar with that type of notation. but I do say they select or I mean it's a ghost thing but let's say I would say I meet night and I think this is a new. But I think if you manage to squeeze a Yeah, okay. There we go. Well, it's very fast, too. That's yeah, because I'm using on tropic and I'm managed to use a cheaper model. That's another story of like tweaking the prompt. So I could like first. This is quick, but right now it's like the embellings. It's using like embelling from OpenAI and getting the tools that I need to use. And then when I get the graph QL, I use it in Anthropic. 15,000 yeah yeah 15,000 just let's say like 30k 30k tokens making our shift so that goes into every I guess prompt has to have all 30,000 I don't get the quick. What I do too. I can give as much as 100K tokens as context that it's the prom blues all the day that that include. And just the parts that are really one to the query. And we've done some research. Yeah, and I guess my question is if you are passing the full 15,000 lines with every query or every But it's less than that because the whole idea, I mean, once let me, let me explain you better. So the strategy that it's the embeddings was able to identify that I need to use this, these two fields. I see. Okay. and the So, this query further references this filter, that includes a bunch of other fields. You get the point like it's these things like reference other types so you need to include them like all Yeah, yeah, you do like an LLM kind of tree shaking To minimize yeah, yeah, so first I identify which mutations or queries I will need And I'll include just the things that I will know. I will do the same thing. And all the dependencies. Yeah, exactly. Yeah. I'm doing this. Yeah. I was wondering what do you embed exactly? Is it only the human language? which this is the the the this is the graphQL prompt that I use for anthropic so it says I will give you the following graphQL schema and I include the this smaller schema that I'm able to. figure and it's able to figure it out. Like these rules, the context they time and that's it. Yeah, but on the open AI site, when you generate the embeddings to to find chemist strategies just like an interface. So full schema, whatever schema I'm pass I return the whole schema. LLM strategy. So it it works like this. Get two descriptions by operation. So what it does. guys, let's say I get this, just this part, let's just code of conduct and then I put two dots and say like this, look up the code of conduct by ski. So I pass all of this in this format. You see here that key value? That's the LLM strategy. Does it incorporate the comment on that type name as well? Because it's like the whole tree right? Yeah. And this gets plugged into these tools from your health laging that they're me al-Qoma. So that's the LLM strategy. The the other strategy is simpler and it's embedding strategy. So for the embedding strategy I basically construct these document. the LLM strategy do you run the LLM once for every single field of every type in the source schema? No, no, no, it's not I chunk I I chunk this like code of condo to dot look at times that can like be even bigger than the context window, but I'm not talking that much, but that's not like very hard to do. It's just like have a buffer, yeah. Yeah, margin ever. Yeah. And limitations and it still it works. It doesn't work for Sesame so that's why I wrote embeddings Because embeddings, you know, I just create embedding make the query and Come up with the tools. Let's say the tools are the queries and mutations that I will need get queries, a mutation, descriptions, and then I create this document using the jamaindex capability. So here is basically the same I just described, let's say it will say like enterprise and look up and enterprise, get keys, and then I return the the proper documents. So this is what it identifies the operations ordered by a score and then build the schema from operation. That's what I call the thing. Once I know But remarkably, it's able to tackle these very huge schemas really well. You see, it's got its like... JDC, what are yours? So two questions. One, I have a question. and see what you thought on that. Right now, there is a repo for it. I think it's usable already there. What I'm trying to do now is just clean it up because I have this Frankenstein. that UL, I could use like hydropen AI or antropic. I do need to use antropic for the bigger ones. So if you just want to play with the CP, there won't be any trouble. So it's here. This schema, I know the query that generates, I can Let me show you this Yeah, I made just wait until you're done with your refact going to do the test. Awesome examples and maybe like two or three examples at like our totally out of scope and not feasible and then take some time to demo that to them and be set the conversation and hate like it's like how much this is there are like, you know, they're afraid of using a graph to algorithm it to structure endpoints and then dump them into some sort of a data explorer, right? Like if we were to create a webinar phase that presented the input, But for example, if we put some sort of like a graph graph on a type thing, whatever, on the third and third end there, and wondering like how much of this that how elegant that could be really, right? And it doesn't have to be perfect, necessarily, but at least a way that it can like infer the date that they are using here if they are just at least of all attributes and it has worked like pretty well. So query equipment, you see these one equipment. Yeah. I queries that already prepared that then they put into some dashboard or anything Because in the end you will need, I think you will need some expertise because sometimes it's unless you describe all the fields it will miss some fields or I think the way that this is currently used is when people want to create specific reports and specific interfaces for, again, key questions. Like give me this particular data for the data. I think that that's really how our customers are utilizing them to create these kind of custom reports. But like what if that didn't have to happen? And instead you could have that natural language query that maybe works for like the simpler or majority of requests. I'm sorry. Good for you since it's a graphana visualization of that, right? I instruct you like query places for this play name. So it did well. It got this play, but for any reason, here with that graphQL. So it says cannot query field places of type query. Did you mean place? You know the solution is places. So I don't know if there is some inconsistency in the graphQL, but that's another story like it got it that's like a post-process, for these very specific errors, right? Basically, it was all of them. Yeah, yeah. That's... I kind of see like more potential on some other... There are some efforts on it, so that's also worth mentioning maybe to the guys here. So, these people from Stanford tried to make basically this kind of thing. of the sales have the context window that it's your restriction. So what these guys did, they took a cheaper LLM and that is some Yamaha and fine-tune it on the instructions. I think because they tried to format everything as function calls and I think this thing has seen more or some function called, you know, there is this coma and things. GraphQL, it's kind of simpler. You know, just know the field. You might miss some fields and still the things that goes on. So it's kind of in between in the spectrum of of things like rest, but in the other end you have Jason, free running Jason and in between you have like rest and graphQL. So that's kind of the reasoning I also used to. to be a functionality to call to do this kind of thing. I think behind the scenes they are doing the same. So it doesn't one can just use the same thing and it will take advantage of the training that that they, yeah, okay. Yeah, and I know it would be great. Sorry. Sorry, go ahead. I was just going to say that I still really do passionately feel about talking to think I Q about. So after you refactor, I'll take it. I'll run with I'll back there. We could get some under payouts to build this and we could learn. So I think there's potential here. I really want to give it a shot. At least throw it at them and get there perspective and then you know, I can summarize those learnings for us to think about it. I'll run it on my end and try to copy some demos. I will say that you can just do that straight. Just let me know when I will put it into our working state with the. I'm just gonna put you all in next week. There is some... It says that it has an error, but it's not like a run-use. That's something I don't understand sometimes. GraphQL. Oh, instruct it. off and so on close out that I Yeah, so you can see here Rob Octifrecord also a response to the question of issues. So qualify as a situation. It's here. It's in the get rid of a 33-jump. You see it's. and it has inams and things like that are not like trivial. This is an inam. This is this kind of nesting pattern that it needs to use. So I feel very confident with the the thing, some more ideas. One idea is that this can form the basis of some assistant, local assistant, so you could instruct like, hey, open the file, browse to the window, like a smarter, that's It's an overlay with the other thing that we were speaking. I know if you remember, Sean, like this, how do you call that these URLs internal URLs? These people are moving in that direction. So it's basically the same thing like you see like they're proposing as martyr-barrel. You can basically do it's the same thing. Yeah, okay. So this is the most general use case like this text interface for many things. I will also recommend you guys to see this, this is the one. And this one is even easier because you know what was the latest like entity involved. So you can hint the model to use that like subgraph let's say. So Sananti team will there. So you can just like stuff the prompt like show me more details, but they are referencing these entities. So you know immediately what's the query that it's involved there and And it's remarkable that these guys, there is some place where it's shown that they are using open source model here. You see they disclose this. So there's a slide. Yeah. GPD or maybe at the start of ontology prompt and I have I was analyzing all of all of these screens that they disclose. Yeah. Remember the They also, they don't have this problem of what it can access or not. It's just like another user. So whatever that user has access to the LLM has access to. And this is the actions that it can take. They include one thing. called Slysofau. They so instead of getting this PII problem and adding to the model, they just put an input filter to whatever PII it's involved. They just got it. Then the model and then there is the valuation. So this is like a whole article. Keep that straight. You want to about the ask a question earlier? Uh, yes. So for the output of the GraphQL, are you using the, like, strange generation or whatever, or is it just opening this syntactically correct GraphQL like on its own? concrete schema. Now nowadays you can do that but I don't know if you can describe GraphQL. You can describe a JSON schema at least in OpenAI. Yeah, but valid queries for that schema, like can you force to open that query experience even without respect to the schema? Sorry, I kind of. we could just maybe use constraint generation to make sure that at least it opens. It's actually great. We're happy to out even if it doesn't know about the schema you know, but it would be even better to use the constraint generation to with knowledge of the schema. Yeah, definitely. I do things like let me show you I think using main I have a drag it's So I first heard Rory is that it doesn't speed like valid. GraphQL are all, but I've never encountered that. Maybe sometimes it's missing like a mutation or query, but the prompts that I'm using are already like covering for that. but in this format, something like that. So for that matter, I use a reggae, a general reggae for refuel queries or mutations. Oh, like an extract part of the result. Yeah, and then I do the validation. in that front. But yeah, you could like constrain. I think there are some efforts and I dare I read a debate of what was open AI really doing because because you can like feed it back and tell it hey, this is bad, generate again, that's one solution or maybe run it with many temperatures and select the valid ones, that's another solution that I've seen. And finally, in my ear and then you can like squash the the probability because each like in that layer you will have one one output per tokens so there will be some invalid tokens so you can fill some guy that did that okay. It's trained like only on valid graph QL maybe that would reduce the full capsize. You know what I mean? Yeah, yeah, that's that's kind of the of the of the approach. that we are having to select the tool. It's just the training that speeds that. Yeah, they find the model just for the but it has some like small errors. I mean I think I prefer to have a more way more powerful LLM rather than have that from what I've seen it's very easy just to use the the reggae, the function calls are like they kind of intuitive when you read them at a way, right? So yeah, I think like a foundation of model, it's like really helps in that situation. Yeah, I've seen that these models, we can look English to French and it's selected the French model. You see? So there should be, I don't know why I think this is an example in their showcase that they are being honest. It just used the model. If you go to the data, they mostly use French, so that's completely biased, it's like overfitted. So I kind of prefer the more general model than trying to, I don't know it's. these but for SQL and if you read the paper like very with a lot of attention they didn't gain much by fine tuning. You can find it here. fine tuning maybe. So Google train an SQL model, this palm, they are comparing it with others. So look the figures. The difference between fine-tune and and a few shots, 60, 7.4, bound with a few shot, so 77.3 to 78.3, 77.3 to 7.3, look. the stuff model to a fine tune model. It was just 1% so I really prefer to have the few shots. That's something that it's also emerging. So once you get the query, you can use also embeddings to get the most of this data is really. It's parts, but I do prefer to use that. And if you see, Langchain, Langchain, let me get Langchain Docs Langchain. Well, they change their docs, but let me show you some examples. or retreat. Well, they have this concept, you know, that on the basis of the prompt, they fetch the most relevant. Yeah, by that some point in my make sense to fine tune. I'm looking forward to bringing on my machine and kind of understanding a little bit deeper. And yeah, the exploring more use cases at the sounds really exciting as well. So this is basically for CPIT, it works really well because it's small and everything can be Is there a way of this and attempted to deploy this to help with SIP and all is that a feasible thing? I mean yes, because it's not private we can't really do that. Well at this point, just to comply with monadico. That's where Sean's ideas about how to like overcome the context when those sides and Local processing would avoid Maybe help if it's if it was female that's it sir. No, it's just using like like a privately host I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, information, right? We actually have a pre-processor to resolve what we're seeing on the screen right now. But yeah, there's still some struggles in the local model to get for figure out and get creative with. I do wonder if a summarization focuss could be useful for that token. Contact size prop as well. Like if something I could be applied to this scheme, get it fixed. But I'm going to stop recording. Yeah, I got that. And I appreciate everyone taking time of their day to come here and explore this and kind of help explore how to keep driving.", 'chunks': [{'timestamp': (0.0, 8.08), 'text': " that's because I'm looking. I just started the recording so we've got the Python, the WISJACS live real time happening and I can share my screen there and do so we have a letter context but then JDC"}, {'timestamp': (8.08, 15.52), 'text': " you can all. Okay, great. That's working. And then, um, they see, you said you can't record the meeting. To start recording. Yeah, I'm going to close my room. I don't have the record about this."}, {'timestamp': (15.52, 24.52), 'text': ' we have the real time transcription because Google needs that for the audio plus the transcript plus the timestamps. So cool. Okay, so we can have our discussion as well.'}, {'timestamp': (24.52, 27.52), 'text': " I'm planning to begin ask questions or how do you want to go about this? Yeah, so let me make a quick introduction. So what I, it all started with the demo from, uh, uh, Polyantier."}, {'timestamp': (27.52, 39.519999999999996), 'text': ' So as you know, in the video, so my guess by looking into the videos or I was amazed how much they could achieve just with using like open source simple models. So I still have a lot of experience.'}, {'timestamp': (39.519999999999996, 47.12), 'text': ' start to, it means that they have a definition of entities and actions that they can take. So then they control this to an LLM. So I started exploring with that with Jammo trying to'}, {'timestamp': (47.12, 55.12), 'text': " I'm a bit kind of work. So in the NID side, that one good pattern or one good way to achieve this was using GraphQL. So GraphQL is this query language that describes operation mutations and queries."}, {'timestamp': (55.12, 61.839999999999996), 'text': " over data and know how to operate. So that means that we shouldn't need to find tune on this. So it's a similar situation of how it works for SQL queries. So people from OpenAI, I am many companies are"}, {'timestamp': (61.839999999999996, 76.84), 'text': " I mean, that we could map to this, it's Cp because it's some actions that you need to take and you want to out of our human language query, you want to know the parameters of a function to call and which function to."}, {'timestamp': (76.84, 83.84), 'text': " I can take you over and it's amazing that Corey, Meahaw and Chonar at the NIMIKOS. I'm reaching to the point that I would love to have more ideas."}, {'timestamp': (83.84, 91.84), 'text': ' So I started by like mocking this kind of entities that could work with ZB. So we have like employees, candidates. I just took a look at out of the comments that we support right now. Reminders.'}, {'timestamp': (91.84, 93.84), 'text': " Let me see maybe I'm just sharing One another whole screen. Okay, I think this will solve it So let me go back so you know Cp it's this like common"}, {'timestamp': (93.84, 95.60000000000001), 'text': " line, we will use it. So there are some entities that are employees. I just like mocked some of them. There is a concept of candidate for the interviews. There is also like reminders. And there are some queries that you can take. So let's say the get-in-play."}, {'timestamp': (95.60000000000001, 101.88000000000001), 'text': " this is part of the first test that I did. So let me show you here how this can work. So let's say in the first day of the library memory to call how to be."}, {'timestamp': (101.88000000000001, 106.24000000000001), 'text': " into, sorry, I was in the middle of some, some, eights here. Ten more gods. They always show up. Yeah, yeah. I'm, I'm refructuring like, have a little bit of these, but you see."}, {'timestamp': (106.24000000000001, 133.10000000000002), 'text': ' kong ang'}, {'timestamp': (133.10000000000002, 143.18000000000004), 'text': " to call it. So it works very well for a bunch of cases, you know. So it can map like many queries. So let's say vacations."}, {'timestamp': (143.18000000000004, 145.18000000000004), 'text': ' Thanks.'}, {'timestamp': (145.18000000000004, 147.18000000000004), 'text': ' Please. Well, you see, if you...'}, {'timestamp': (147.18000000000004, 153.18000000000004), 'text': " So you see, well, it's very, it's very good for this kind of thing. So it creates the occasion. It feels all the things here. So this can basically be adapted to any like."}, {'timestamp': (153.18000000000004, 157.98000000000005), 'text': ' like, sick cases, such as the one of automating a CLI. So that means adding a common line interface for many things, as long as you can describe the things like this. And this is a very like, like, flexing.'}, {'timestamp': (157.98000000000005, 161.98000000000005), 'text': " So that was the first thing I did, but this has a challenge. What's the challenge that I need to fit this into the LLM? So LLM's have a context window, so you cannot put more text data than it's a program too."}, {'timestamp': (161.98000000000005, 164.98000000000005), 'text': " the range, let's say word tokens. You know, so if we want to use this for bigger things, we need to come up with an strategy. So that's what I've been doing. So what I did is"}, {'timestamp': (164.98000000000005, 170.92000000000004), 'text': " This I create like a sub tree because this this thing generates us a tree what I mean by this So let's say I know this is a mutation that I need to to to use so there are two things reference here to to to type"}, {'timestamp': (170.92000000000004, 181.92000000000004), 'text': " There is a, I come to employee here and they're not further like references. So, this case is very simple, but there are some others that are there are more complex. So, I've been implementing strategies for this. So, one strategy."}, {'timestamp': (181.92000000000004, 196.92000000000004), 'text': ' So I can use that to create embeddings and then identify or just use that strategy of the top k to know which one should I should include.'}, {'timestamp': (196.92000000000004, 201.68000000000004), 'text': ' create the strategy where I directly ask the LLM. So instead of using embeddings, I ask the LLM as if they were tools that is the thing that is now like very popular. So I ask the LLM. This is the query I have. This is the tools that I have to accomplish that with their own.'}, {'timestamp': (201.68000000000004, 207.68000000000004), 'text': ' on the screen. I create again this, this like sub tree of the graph QL and then I can obtain the query. So that was a challenge because even that strategy with using the other lem, in the case of of says me, concretely that.'}, {'timestamp': (207.68000000000004, 210.64000000000004), 'text': " It's number of queries is so big. Unutations, that it doesn't fit. So I need to come up with the embedding strategy. So there, you usually, I mean, you can select strategies. So let's say,"}, {'timestamp': (210.64000000000004, 218.64000000000004), 'text': " So this is the query I'm giving for the Rails organization get their Rails repo on the last full request with status open. This is the graphQL that I'm passing. The schema strategy I'm using embeddings. So I'm just creating them bedding."}, {'timestamp': (218.64000000000004, 220.64000000000004), 'text': " and Did it in our way and stable state right now? Let me try to uncommon some things that I'm I"}, {'timestamp': (220.64000000000004, 222.64000000000004), 'text': " Of course. Yeah, it's very funny to use the LOM strategy here. So did you actually can take a while?"}, {'timestamp': (222.64000000000004, 229.64000000000004), 'text': ' and just getting the parts is too big but with the anthropic LLM I can make it it does fit. So, how many tokens is there if I mean like... How many tokens is there?'}, {'timestamp': (229.64000000000004, 237.64000000000004), 'text': ' But to be able to fit that to the LLM, you need an strategy to tell the LLM which parts it needs to use. So there are two strategies that I implemented. One is using embeddings. So I grab the with.'}, {'timestamp': (237.64000000000004, 246.20000000000005), 'text': " book and answer this. So I cannot definitely fit a GitHub GraphQL because look how big it is, not even in the in the Vigger ones, in the Vigger and Tropic. So it's 50."}, {'timestamp': (246.20000000000005, 249.56000000000006), 'text': ' So even the 100k window of this LLM. So I just need to pick the entities that are relevant with whatever strategy. Right now there are two like LLMs and embeddings. So I managed to get it that thing'}, {'timestamp': (249.56000000000006, 255.56000000000006), 'text': " queries are they all, you know, do they have rich comments like that? Like most of of big APIs do have that. Okay. And that's a requirement."}, {'timestamp': (255.56000000000006, 262.2800000000001), 'text': " And because if you got if you got rid of the comments than the LM would not maybe note as well. Yeah, as well. But like I guess there's a trade off rate like maybe you could eliminate the comments and get a better result."}, {'timestamp': (262.2800000000001, 269.0800000000001), 'text': " because not like foretell that. So I've implemented some like strategies. So for instance, I have one where I can just fit in the like the tools that I have available. And then when I include"}, {'timestamp': (269.0800000000001, 274.2800000000001), 'text': " the graph goes reasons but because it just doesn't fit but yeah all of those combinations are available I can throw a control in a graph QL without comments and it might work if if this variable names are self descriptive as you know as you have seen these"}, {'timestamp': (274.2800000000001, 284.2800000000001), 'text': ' one of these things work. So there is this explorer from see. So I ask this get'}, {'timestamp': (284.2800000000001, 286.2800000000001), 'text': " They're rosfick. Just by. And it's just really complex. That's the output that you got from your script right there. And this is the output. These one on the left hand side or the one that is highlighted."}, {'timestamp': (286.2800000000001, 292.2800000000001), 'text': " that's what I got and then he got the stars and then you can. So that's let me try another one. Yeah, and maybe I could share a use case that where we were talking about, okay, see."}, {'timestamp': (292.2800000000001, 296.68000000000006), 'text': " building. They're really pushing tons of small meetings as I mean, if I just deliverage these open data standards to collect data for like incompatible appliances, like let's say Mixers on a manufacturing line from different manufacturing."}, {'timestamp': (296.68000000000006, 303.68000000000006), 'text': ' actors to do that with just their knowledge. So creating the open data standards is a chunk of the work. Then improving the gooey interface that they have to then create these graph QL queries is another and I think where'}, {'timestamp': (303.68000000000006, 306.68000000000006), 'text': " We can specific appliances and data elements. How do we take that as that further, basically use language or jdc's describing here to say, hey, I want to get a historian graph for all of the mixers that we had on the line."}, {'timestamp': (306.68000000000006, 311.68000000000006), 'text': " And that would put that specific data for them, right? That's kind of the idea we're going to explore at least. So let me show you, I already implemented SESME. So let me show you this. So what I did,"}, {'timestamp': (311.68000000000006, 316.9000000000001), 'text': " I went to describing. So they have this query that it's not an easy one. So return a list of times here is sample values for a given instance, that we'll specify time range. So I did, I further described this."}, {'timestamp': (316.9000000000001, 320.5000000000001), 'text': " because it's within the start time blah blah blah and time. Take a look at this query and it's exactly just a couple of more things just the first. But it's basically got it right. So get raw data with."}, {'timestamp': (320.5000000000001, 327.5000000000001), 'text': " I don't know if necessarily the users of this would be very clear about the information they need and have an awareness of that. I understand most engineers in manufacturing spaces are familiar with that type of notation."}, {'timestamp': (327.5000000000001, 333.8200000000001), 'text': " but I do say they select or I mean it's a ghost thing but let's say I would say I meet night and I think this is a new. But I think if you manage to squeeze a"}, {'timestamp': (333.8200000000001, 348.8200000000001), 'text': " Yeah, okay. There we go. Well, it's very fast, too. That's yeah, because I'm using on tropic and I'm managed to use a cheaper model. That's another story of like tweaking the prompt. So I could like first."}, {'timestamp': (348.8200000000001, 352.0600000000001), 'text': " This is quick, but right now it's like the embellings. It's using like embelling from OpenAI and getting the tools that I need to use. And then when I get the graph QL, I use it in Anthropic."}, {'timestamp': (352.0600000000001, 362.22000000000014), 'text': " 15,000 yeah yeah 15,000 just let's say like 30k 30k tokens making our shift so that goes into every I guess prompt has to have all 30,000"}, {'timestamp': (362.22000000000014, 364.22000000000014), 'text': " I don't get the quick. What I do too. I can give as much as 100K tokens as context that it's the prom blues all the day that that include."}, {'timestamp': (364.22000000000014, 368.22000000000014), 'text': " And just the parts that are really one to the query. And we've done some research. Yeah, and I guess my question is if you are passing the full 15,000 lines with every query or every"}, {'timestamp': (368.22000000000014, 375.22000000000014), 'text': " But it's less than that because the whole idea, I mean, once let me, let me explain you better. So the strategy that it's the embeddings was able to identify that I need to use this, these two fields. I see. Okay."}, {'timestamp': (375.22000000000014, 382.22000000000014), 'text': ' and the'}, {'timestamp': (382.22000000000014, 394.22000000000014), 'text': ' So, this query further references this filter, that includes a bunch of other fields.'}, {'timestamp': (394.22000000000014, 400.22000000000014), 'text': " You get the point like it's these things like reference other types so you need to include them like all Yeah, yeah, you do like an LLM kind of tree shaking To minimize yeah, yeah, so first I identify which mutations or queries I will need"}, {'timestamp': (400.22000000000014, 403.22000000000014), 'text': " And I'll include just the things that I will know. I will do the same thing. And all the dependencies. Yeah, exactly. Yeah. I'm doing this. Yeah. I was wondering what do you embed exactly? Is it only the human language?"}, {'timestamp': (403.22000000000014, 418.02000000000015), 'text': " which this is the the the this is the graphQL prompt that I use for anthropic so it says I will give you the following graphQL schema and I include the this smaller schema that I'm able to."}, {'timestamp': (418.02000000000015, 426.02000000000015), 'text': " figure and it's able to figure it out. Like these rules, the context they time and that's it. Yeah, but on the open AI site, when you generate the embeddings to to"}, {'timestamp': (426.02000000000015, 431.70000000000016), 'text': " find chemist strategies just like an interface. So full schema, whatever schema I'm pass I return the whole schema. LLM strategy. So it it works like this. Get two descriptions by operation. So what it does."}, {'timestamp': (431.70000000000016, 434.94000000000017), 'text': " guys, let's say I get this, just this part, let's just code of conduct and then I put two dots and say like this, look up the code of conduct by ski. So I pass all of this in this format. You see here that key value?"}, {'timestamp': (434.94000000000017, 436.94000000000017), 'text': " That's the LLM strategy. Does it incorporate the comment on that type name as well? Because it's like the whole tree right? Yeah. And this gets plugged into these tools from your health laging that they're"}, {'timestamp': (436.94000000000017, 446.14000000000016), 'text': " me al-Qoma. So that's the LLM strategy. The the other strategy is simpler and it's embedding strategy. So for the embedding strategy I basically construct these document."}, {'timestamp': (446.14000000000016, 453.14000000000016), 'text': " the LLM strategy do you run the LLM once for every single field of every type in the source schema? No, no, no, it's not I chunk I I chunk this like code of condo to dot"}, {'timestamp': (453.14000000000016, 458.1800000000002), 'text': " look at times that can like be even bigger than the context window, but I'm not talking that much, but that's not like very hard to do. It's just like have a buffer, yeah. Yeah, margin ever. Yeah. And"}, {'timestamp': (458.1800000000002, 462.94000000000017), 'text': " limitations and it still it works. It doesn't work for Sesame so that's why I wrote embeddings Because embeddings, you know, I just create embedding make the query and Come up with the tools. Let's say the tools are the queries and mutations that I will need"}, {'timestamp': (462.94000000000017, 470.94000000000017), 'text': " get queries, a mutation, descriptions, and then I create this document using the jamaindex capability. So here is basically the same I just described, let's say it will say like enterprise and look up and"}, {'timestamp': (470.94000000000017, 477.94000000000017), 'text': " enterprise, get keys, and then I return the the proper documents. So this is what it identifies the operations ordered by a score and then build the schema from operation. That's what I call the thing. Once I know"}, {'timestamp': (477.94000000000017, 484.94000000000017), 'text': " But remarkably, it's able to tackle these very huge schemas really well. You see, it's got its like... JDC, what are yours? So two questions. One, I have a question."}, {'timestamp': (484.94000000000017, 487.94000000000017), 'text': " and see what you thought on that. Right now, there is a repo for it. I think it's usable already there. What I'm trying to do now is just clean it up because I have this Frankenstein."}, {'timestamp': (487.94000000000017, 494.90000000000015), 'text': " that UL, I could use like hydropen AI or antropic. I do need to use antropic for the bigger ones. So if you just want to play with the CP, there won't be any trouble. So it's here."}, {'timestamp': (494.90000000000015, 501.90000000000015), 'text': " This schema, I know the query that generates, I can Let me show you this Yeah, I made just wait until you're done with your refact going to do"}, {'timestamp': (501.90000000000015, 507.44000000000017), 'text': " the test. Awesome examples and maybe like two or three examples at like our totally out of scope and not feasible and then take some time to demo that to them and be set the conversation and hate like it's like"}, {'timestamp': (507.44000000000017, 509.26000000000016), 'text': " how much this is there are like, you know, they're afraid of using a graph to algorithm it to structure endpoints and then dump them into some sort of a data explorer, right? Like if we were to create a webinar phase that presented the input,"}, {'timestamp': (509.26000000000016, 513.9000000000002), 'text': " But for example, if we put some sort of like a graph graph on a type thing, whatever, on the third and third end there, and wondering like how much of this that how elegant that could be really, right? And it doesn't have to be perfect, necessarily, but at least"}, {'timestamp': (513.9000000000002, 518.4200000000002), 'text': ' a way that it can like infer the date that they are using here if they are just at least of all attributes and it has worked like pretty well. So query equipment, you see these one equipment. Yeah.'}, {'timestamp': (518.4200000000002, 523.4200000000002), 'text': " I queries that already prepared that then they put into some dashboard or anything Because in the end you will need, I think you will need some expertise because sometimes it's unless you describe all the fields it will miss some fields or"}, {'timestamp': (523.4200000000002, 530.4200000000002), 'text': ' I think the way that this is currently used is when people want to create specific reports and specific interfaces for, again, key questions. Like give me this particular data for the data.'}, {'timestamp': (530.4200000000002, 536.4200000000002), 'text': " I think that that's really how our customers are utilizing them to create these kind of custom reports. But like what if that didn't have to happen? And instead you could have that natural language query that maybe works for like the simpler or majority of requests."}, {'timestamp': (536.4200000000002, 538.4200000000002), 'text': " I'm sorry. Good for you since it's a graphana visualization of that, right? I instruct you like query places for this play name. So it did well. It got this play, but for any reason,"}, {'timestamp': (538.4200000000002, 543.9400000000002), 'text': " here with that graphQL. So it says cannot query field places of type query. Did you mean place? You know the solution is places. So I don't know if there is some inconsistency in the graphQL, but that's another story like it got it that's"}, {'timestamp': (543.9400000000002, 547.9400000000002), 'text': " like a post-process, for these very specific errors, right? Basically, it was all of them. Yeah, yeah. That's... I kind of see like more potential on some other..."}, {'timestamp': (547.9400000000002, 554.9400000000002), 'text': " There are some efforts on it, so that's also worth mentioning maybe to the guys here. So, these people from Stanford tried to make basically this kind of thing."}, {'timestamp': (554.9400000000002, 561.5800000000002), 'text': " of the sales have the context window that it's your restriction. So what these guys did, they took a cheaper LLM and that is some Yamaha and fine-tune it on the instructions."}, {'timestamp': (561.5800000000002, 576.5800000000002), 'text': ' I think because they tried to format everything as function calls and I think this thing has seen more'}, {'timestamp': (576.5800000000002, 579.9400000000002), 'text': " or some function called, you know, there is this coma and things. GraphQL, it's kind of simpler. You know, just know the field. You might miss some fields and still the things that goes on. So it's kind of in between in the spectrum of"}, {'timestamp': (579.9400000000002, 588.6400000000002), 'text': " of things like rest, but in the other end you have Jason, free running Jason and in between you have like rest and graphQL. So that's kind of the reasoning I also used to."}, {'timestamp': (588.6400000000002, 593.4000000000002), 'text': " to be a functionality to call to do this kind of thing. I think behind the scenes they are doing the same. So it doesn't one can just use the same thing and it will take advantage of the training that"}, {'timestamp': (593.4000000000002, 605.4000000000002), 'text': " that they, yeah, okay. Yeah, and I know it would be great. Sorry. Sorry, go ahead. I was just going to say that I still really do passionately feel about talking to think I Q about. So after you refactor, I'll take it. I'll run with"}, {'timestamp': (605.4000000000002, 609.4000000000002), 'text': " I'll back there. We could get some under payouts to build this and we could learn. So I think there's potential here. I really want to give it a shot. At least throw it at them and get there perspective and then you know, I can summarize those learnings for us to think about it."}, {'timestamp': (609.4000000000002, 612.4000000000002), 'text': " I'll run it on my end and try to copy some demos. I will say that you can just do that straight. Just let me know when I will put it into our working state with the."}, {'timestamp': (612.4000000000002, 614.4000000000002), 'text': " I'm just gonna put you all in next week. There is some... It says that it has an error, but it's not like a run-use. That's something I don't understand sometimes. GraphQL. Oh, instruct it."}, {'timestamp': (614.4000000000002, 616.4000000000002), 'text': ' off and so on close out that I Yeah, so you can see here Rob Octifrecord'}, {'timestamp': (616.4000000000002, 622.4000000000002), 'text': " also a response to the question of issues. So qualify as a situation. It's here. It's in the get rid of a 33-jump. You see it's."}, {'timestamp': (622.4000000000002, 627.0000000000002), 'text': ' and it has inams and things like that are not like trivial. This is an inam. This is this kind of nesting pattern that it needs to use. So I feel very confident with the'}, {'timestamp': (627.0000000000002, 634.6000000000003), 'text': " the thing, some more ideas. One idea is that this can form the basis of some assistant, local assistant, so you could instruct like, hey, open the file, browse to the window, like a smarter, that's"}, {'timestamp': (634.6000000000003, 637.2800000000002), 'text': " It's an overlay with the other thing that we were speaking. I know if you remember, Sean, like this, how do you call that these URLs internal URLs? These people are moving in that direction."}, {'timestamp': (637.2800000000002, 643.8000000000002), 'text': " So it's basically the same thing like you see like they're proposing as martyr-barrel. You can basically do it's the same thing. Yeah, okay."}, {'timestamp': (643.8000000000002, 654.8000000000002), 'text': ' So this is the most general use case like this text interface for many things. I will also recommend you guys to see this, this is the one.'}, {'timestamp': (654.8000000000002, 661.8000000000002), 'text': " And this one is even easier because you know what was the latest like entity involved. So you can hint the model to use that like subgraph let's say."}, {'timestamp': (661.8000000000002, 677.8000000000002), 'text': " So Sananti team will there. So you can just like stuff the prompt like show me more details, but they are referencing these entities. So you know immediately what's the query that it's involved there and"}, {'timestamp': (677.8000000000002, 688.8400000000001), 'text': " And it's remarkable that these guys, there is some place where it's shown that they are using open source model here. You see they disclose this. So there's a slide. Yeah."}, {'timestamp': (688.8400000000001, 695.5600000000002), 'text': ' GPD or maybe at the start of ontology prompt and I have I was analyzing all of all of these screens that they disclose. Yeah. Remember the'}, {'timestamp': (695.5600000000002, 701.3200000000002), 'text': " They also, they don't have this problem of what it can access or not. It's just like another user. So whatever that user has access to the LLM has access to. And this is the actions that it can take. They include one thing."}, {'timestamp': (701.3200000000002, 716.3200000000002), 'text': " called Slysofau. They so instead of getting this PII problem and adding to the model, they just put an input filter to whatever PII it's involved. They just got it. Then the model and then there is the valuation. So this is like a whole article."}, {'timestamp': (716.3200000000002, 717.3200000000002), 'text': ' Keep that straight. You want to about the ask a question earlier? Uh, yes. So for the output of the GraphQL, are you using the, like, strange generation or whatever, or is it just opening this syntactically correct GraphQL like on its own?'}, {'timestamp': (717.3200000000002, 723.3200000000002), 'text': " concrete schema. Now nowadays you can do that but I don't know if you can describe GraphQL. You can describe a JSON schema at least in OpenAI."}, {'timestamp': (723.3200000000002, 735.1200000000001), 'text': ' Yeah, but valid queries for that schema, like can you force to open that query experience even without respect to the schema? Sorry, I kind of.'}, {'timestamp': (735.1200000000001, 739.2800000000001), 'text': " we could just maybe use constraint generation to make sure that at least it opens. It's actually great. We're happy to out even if it doesn't know about the schema you know, but it would be even better to use the constraint generation to with knowledge of the schema. Yeah, definitely."}, {'timestamp': (739.2800000000001, 752.5200000000001), 'text': " I do things like let me show you I think using main I have a drag it's"}, {'timestamp': (752.5200000000001, 756.3600000000001), 'text': " So I first heard Rory is that it doesn't speed like valid. GraphQL are all, but I've never encountered that. Maybe sometimes it's missing like a mutation or query, but the prompts that I'm using are already like covering for that."}, {'timestamp': (756.3600000000001, 762.3600000000001), 'text': ' but in this format, something like that. So for that matter, I use a reggae, a general reggae for refuel queries or mutations. Oh, like an extract part of the result. Yeah, and then I do the validation.'}, {'timestamp': (762.3600000000001, 771.1600000000001), 'text': ' in that front. But yeah, you could like constrain. I think there are some efforts and I dare I read a debate of what was open AI really doing because'}, {'timestamp': (771.1600000000001, 773.6000000000001), 'text': " because you can like feed it back and tell it hey, this is bad, generate again, that's one solution or maybe run it with many temperatures and select the valid ones, that's another solution that I've seen. And finally,"}, {'timestamp': (773.6000000000001, 780.0800000000002), 'text': ' in my ear and then you can like squash the the probability because each like in that layer you will have one one output per tokens so there will be some invalid tokens so you can fill some guy that did that okay.'}, {'timestamp': (780.0800000000002, 789.0800000000002), 'text': " It's trained like only on valid graph QL maybe that would reduce the full capsize. You know what I mean? Yeah, yeah, that's that's kind of the of the of the approach."}, {'timestamp': (789.0800000000002, 794.6000000000001), 'text': " that we are having to select the tool. It's just the training that speeds that. Yeah, they find the model just for the"}, {'timestamp': (794.6000000000001, 802.0800000000002), 'text': " but it has some like small errors. I mean I think I prefer to have a more way more powerful LLM rather than have that from what I've seen it's very easy just to use the"}, {'timestamp': (802.0800000000002, 806.6400000000001), 'text': " the reggae, the function calls are like they kind of intuitive when you read them at a way, right? So yeah, I think like a foundation of model, it's like really helps in that situation. Yeah, I've seen that these models,"}, {'timestamp': (806.6400000000001, 812.0800000000002), 'text': " we can look English to French and it's selected the French model. You see? So there should be, I don't know why I think this is an example in their showcase that they are being honest."}, {'timestamp': (812.0800000000002, 818.0000000000001), 'text': " It just used the model. If you go to the data, they mostly use French, so that's completely biased, it's like overfitted. So I kind of prefer the more general model than trying to, I don't know it's."}, {'timestamp': (818.0000000000001, 825.0000000000001), 'text': " these but for SQL and if you read the paper like very with a lot of attention they didn't gain much by fine tuning. You can find it here. fine tuning maybe."}, {'timestamp': (825.0000000000001, 834.0000000000001), 'text': ' So Google train an SQL model, this palm, they are comparing it with others. So look the figures. The difference between fine-tune and'}, {'timestamp': (834.0000000000001, 839.1200000000001), 'text': ' and a few shots, 60, 7.4, bound with a few shot, so 77.3 to 78.3, 77.3 to 7.3, look.'}, {'timestamp': (839.1200000000001, 846.0800000000002), 'text': " the stuff model to a fine tune model. It was just 1% so I really prefer to have the few shots. That's something that it's also emerging. So once you get the query, you can use also embeddings to get"}, {'timestamp': (846.0800000000002, 848.0800000000002), 'text': " the most of this data is really. It's parts, but I do prefer to use that. And if you see, Langchain, Langchain, let me get Langchain Docs Langchain."}, {'timestamp': (848.0800000000002, 855.0800000000002), 'text': ' Well, they change their docs, but let me show you some examples.'}, {'timestamp': (855.0800000000002, 868.4400000000002), 'text': ' or retreat. Well, they have this concept, you know, that on the basis of the prompt, they fetch the most relevant.'}, {'timestamp': (868.4400000000002, 873.6400000000002), 'text': ' Yeah, by that some point in my make sense to fine tune.'}, {'timestamp': (873.6400000000002, 878.6400000000002), 'text': " I'm looking forward to bringing on my machine and kind of understanding a little bit deeper. And yeah, the exploring more use cases at the sounds really exciting as well."}, {'timestamp': (878.6400000000002, 892.1600000000002), 'text': " So this is basically for CPIT, it works really well because it's small and everything can be"}, {'timestamp': (892.1600000000002, 899.1600000000002), 'text': " Is there a way of this and attempted to deploy this to help with SIP and all is that a feasible thing? I mean yes, because it's not private we can't really do that. Well at this point, just to comply with monadico."}, {'timestamp': (899.1600000000002, 904.2000000000002), 'text': " That's where Sean's ideas about how to like overcome the context when those sides and Local processing would avoid Maybe help if it's if it was female that's it sir. No, it's just using like like a privately host"}, {'timestamp': (904.2000000000002, 919.2000000000002), 'text': ' I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will, I will,'}, {'timestamp': (919.2000000000002, 925.7600000000001), 'text': " information, right? We actually have a pre-processor to resolve what we're seeing on the screen right now. But yeah, there's still some struggles in the local model to get for figure out and get creative with."}, {'timestamp': (925.7600000000001, 931.7600000000001), 'text': ' I do wonder if a summarization focuss could be useful for that token. Contact size prop as well. Like if something I could be applied to this scheme, get it fixed.'}, {'timestamp': (931.7600000000001, 933.7600000000001), 'text': " But I'm going to stop recording. Yeah, I got that. And I appreciate everyone taking time of their day to come here and explore this and kind of help explore how to keep driving."}]}