Running your own local AI.

Gemini:
[screenshot: Gemini's responses]

Faraday:
[screenshot: Faraday's responses]


You have to remember that when you're using the AI integration in Google, it isn't building a conversation but giving you a one-off answer. When I do it in Faraday or Gemini, it's part of a conversation, so it remembers its previous answers (up to a limit) and tries to vary its responses. Note how Gemini tried to include emotional context for its picks.
 
You have to remember that when you're using the AI integration in Google, it isn't building a conversation but giving you a one-off answer.
Then does the AI think we are blubbering idiots with the short-term memory of a goldfish for asking the same question over and over?
 
Then does the AI think we are blubbering idiots with the short-term memory of a goldfish for asking the same question over and over?
Each question in the little browser AI plugin is essentially a new conversation. So in essence... yes.
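To make the difference concrete, here's a minimal sketch in Python, assuming the common role/content chat-message format; `ask()` is just a hypothetical stand-in for whatever local model or API you actually run:

```python
# Hypothetical ask() stands in for whatever local model or API you run;
# the message format shown is the common role/content chat convention.

def ask(messages):
    """Placeholder model call; a real backend would generate the reply."""
    return f"(model reply to {messages[-1]['content']!r})"

# One-off query, like a browser AI plugin: no history is sent,
# so every call is effectively a brand-new conversation.
def ask_once(question):
    return ask([{"role": "user", "content": question}])

# Conversational query: prior turns are resent on every call, so the
# model can "remember" earlier answers (up to its context-window limit).
def ask_in_conversation(history, question):
    history.append({"role": "user", "content": question})
    reply = ask(history)
    history.append({"role": "assistant", "content": reply})
    return reply

history = []
ask_in_conversation(history, "Pick a color.")
ask_in_conversation(history, "Pick a different one.")  # model sees turn 1
ask_once("Pick a color.")                              # model sees nothing prior
```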
 
There is the answer I sought. AI does not provide reliably repeatable output when using the same inputs. I feared as much - that the AI creators would incorporate anthropomorphism in an effort to provide a plausible Turing-test chatbot.

The last example I provided, "give me a letter of the Greek alphabet or a color", was intended to see whether, when given a choice, the AI would default to the first of a string of exclusive choices or to the last.
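For what it's worth, a quick way to run that test systematically would be a small harness like this (Python; `ask_model` is a hypothetical callable for whatever model you're poking at):

```python
from collections import Counter

def tally_answers(ask_model, prompt, n=50):
    """Send the same open-ended prompt n times as fresh, stateless calls
    and count how often each distinct answer comes back."""
    return Counter(ask_model(prompt).strip().lower() for _ in range(n)).most_common()

# Example (hypothetical):
# tally_answers(ask_model, "Give me a letter of the Greek alphabet or a color")
# A heavy skew toward "alpha" or toward a color would show whether the model
# defaults to the first or the last of the exclusive choices.
```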

Historically, we've used computers primarily because they've shown themselves capable of reliably repeatable output given the exact same input - they're machines. A person could trust the solution, provided the internal process was understood and wasn't an unknown black box.

Now ... AI's process has become a big, unknown black box, and it edges toward being untrustworthy simply because it has been purposefully made incapable of reliably repeatable output.
 
Now ... AI's process has become a big, unknown black box, and it edges toward being untrustworthy simply because it has been purposefully made incapable of reliably repeatable output.
I'm expecting them to fix this - and eventually when asked the same question repeatedly, to remind the requestor of such ("I already answered that one!", etc.).

The draw for me is for such models to ingest a defined dataset and be able to answer questions as well as produce derivative work. I don't expect public models to be very effective very quickly, but in the private space with privileged datasets and processes, honestly I understand the enthusiasm.

Of course, I have no expectation for them to become useful to me personally or professionally if I have to run them myself. May be worth doing just for the learning, though.
 
I'm expecting them to fix this - and eventually when asked the same question repeatedly, to remind the requestor of such ("I already answered that one!", etc.).

The draw for me is for such models to ingest a defined dataset and be able to answer questions as well as produce derivative work. I don't expect public models to be very effective very quickly, but in the private space with privileged datasets and processes, honestly I understand the enthusiasm.

Of course, I have no expectation for them to become useful to me personally or professionally if I have to run them myself. May be worth doing just for the learning, though.

The below is written with the metaphorical "you" ...

When you ask an AI to analyze a data set, you expect to get a reliable result based on specified input parameters.

If you ask the AI to analyze the same data set two days later using the exact same input parameters, you expect to get the same exact result. Why you might do that is irrelevant to the task.

An imperfect analogy: if you gave an AI-enhanced web browser a URL to a favorite website or discussion forum, you'd expect to be taken to that website regardless of whether it was 6am or 9pm, on a Monday or a Saturday, even though the contents of the website might have changed since the last visit. However, it would be a complete and utter disappointment if the AI then told you, "I've already shown you that site, try this one instead ..." Note this analogy is imperfect since it breaks from the previous examples, which were open-ended requests; this one is a specific request.

[edit to add this last thought ... if you can't get a reliable output from a machine made for the purpose of analyzing data sets, then you might just as well show the data set to the guy sitting on the next bar stool and see what he thinks.]
 
However, it would be a complete and utter disappointment if the AI then told you, "I've already shown you that site, try this one instead ..." Note this analogy is imperfect since it breaks from the previous examples, which were open-ended requests; this one is a specific request.
I'd accept this answer so long as the answer that was expected was also returned and labeled as such. In fact, that's what I'd prefer; if I'm asking the same question and expecting the same answer repeatedly, then there's either a problem with the answer or there is a process that can be optimized.

Not arguing though, just adding to the thought.
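A minimal sketch of that preference (Python; `RepeatAwareModel` and `ask_model` are hypothetical names, not any product's API): on an exact repeat, return the stored answer, but label it so the repetition itself is visible:

```python
import hashlib

class RepeatAwareModel:
    """Wrap a model call so an exact repeat returns the stored answer,
    labeled as such, rather than sampling a fresh (possibly different) one."""

    def __init__(self, ask_model):
        self.ask_model = ask_model  # hypothetical callable: prompt -> answer
        self.cache = {}

    def ask(self, prompt):
        key = hashlib.sha256(prompt.encode()).hexdigest()
        if key in self.cache:
            return f"(already answered) {self.cache[key]}"
        answer = self.ask_model(prompt)
        self.cache[key] = answer
        return answer
```

An exact-match cache like this only flags literally identical prompts, of course; catching near-duplicate questions is a harder problem.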
 
I see where you're coming from, Ditchinit; it seems you want GIGO and RGOG - Garbage In, Garbage Out and Request Given, Output Given - to be in effect.

If you ask for the state of the database, you expect the state of the database. If you ask why 2 plus 5 equals 3, you expect to get an error, not a made-up explanation as to why 2 plus 5 equals 3 or something else.

AI is a salesman trying to tell the end user what they want to hear at all times, based on its knowledge base and creativity. You can adjust how 'creative' an AI can be. You can tell it to be ultimately factual, within limits. So that's a sliding scale.

[screenshot: Faraday's generation settings]

These are the settings in Faraday that I am thinking of when discussing the above.
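For anyone curious what that slider does under the hood: most samplers divide the model's raw scores by the temperature before turning them into probabilities. A toy Python illustration (the numbers are made up):

```python
import numpy as np

logits = np.array([2.0, 1.0, 0.5, 0.1])  # toy scores for four candidate replies

def softmax_with_temperature(logits, temperature):
    """Low temperature sharpens the distribution toward the top choice
    (more repeatable); high temperature flattens it (more 'creative')."""
    z = logits / temperature
    z = z - z.max()          # subtract max for numerical stability
    p = np.exp(z)
    return p / p.sum()

for t in (0.2, 1.0, 2.0):
    print(t, np.round(softmax_with_temperature(logits, t), 3))
# At 0.2 nearly all probability lands on the top option (near-deterministic);
# at 2.0 the distribution is much flatter, so answers vary between runs.
```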
 
Asking for a random number, or a favorite color, is a lot different from asking what 2+2 is.

The first, you expect to get different results.
The second, it's possible to get different results, but you don't necessarily expect it.
The last, you better not get different results.

So some context is needed when talking about the repeatability of answers.
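That context maps onto two knobs. A toy sketch (Python, not any particular product's API): a fixed random seed makes even "creative" sampling repeatable run-to-run, and temperature zero (greedy decoding) is repeatable by construction:

```python
import numpy as np

def pick(options, temperature, seed=None):
    """Toy sampler over a fixed preference list; seed pins the RNG,
    temperature <= 0 means greedy (always the top-scored option)."""
    logits = np.linspace(1.5, 0.5, num=len(options))
    if temperature <= 0:
        return options[int(np.argmax(logits))]      # same answer every run
    rng = np.random.default_rng(seed)
    p = np.exp(logits / temperature)
    p = p / p.sum()
    return options[int(rng.choice(len(options), p=p))]

colors = ["red", "green", "blue"]
print(pick(colors, temperature=1.0, seed=7))   # sampled, but ...
print(pick(colors, temperature=1.0, seed=7))   # ... same seed, same answer
print(pick(colors, temperature=0.0))           # greedy: repeatable, seed or not
```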
 
I see where you're coming from, Ditchinit; it seems you want GIGO and RGOG - Garbage In, Garbage Out and Request Given, Output Given - to be in effect.

If you ask for the state of the database, you expect the state of the database.

Not necessarily, and yes. I use computers to help me make decisions; I program the logic routines so I know what goes on inside the black box (or use software from proven, reliable sources). I do not think it wise to let computers make decisions on their own without all potential consequences being fully vetted. I'm not against AI. My concerns regarding AI are with the human side of the equation - that humans will abdicate critical thinking and decisions to AI because that's the easy thing to do - pure laziness stemming from ignorance and apathy.

AI is a salesman trying to tell the end user what they want to hear at all times, based on its knowledge base and creativity. You can adjust how 'creative' an AI can be. You can tell it to be ultimately factual, within limits. So that's a sliding scale.

This is a case in point of why indiscriminate AI use concerns me. If you want a sycophantic echo chamber, you have plenty of social media options to choose from. Computers are tools - not anthropomorphic friends.

The best (and clearly not the only) way to use AI is to apply it to complex data sets with interdependent variables otherwise too convoluted for humans to fully understand, and to reliably identify hidden correlations - regardless of whether they're desirable - so that humans can then make informed decisions or take action based on those results, after considering and/or verifying the output's validity.
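As a toy, deliberately non-AI illustration of that division of labor (the data is made up for the example): the machine surfaces a buried correlation, and the human decides what, if anything, it means:

```python
import numpy as np

rng = np.random.default_rng(0)
# Made-up data set: three variables, one buried dependency (c tracks a).
a = rng.normal(size=200)
b = rng.normal(size=200)
c = 0.8 * a + rng.normal(scale=0.3, size=200)

corr = np.corrcoef(np.column_stack([a, b, c]), rowvar=False)
print(np.round(corr, 2))  # the a<->c cell stands out; a human then verifies it
```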

YMMV
 