Running your own local AI.

Grimlakin

Forum Posting Supreme
Joined
Jun 24, 2019
Messages
8,461
Points
113
I did this in windows largely following the instructions I found here:
https://www.howtogeek.com/881317/how-to-run-a-chatgpt-like-ai-on-your-own-pc/

The last step has you use the command line to download the actual AI engine... or whatever you want to call it. For me, that step fails. If you go into Docker and expand the project, you can launch a web interface and download the AI engine of your choice from there.

Be warned... when they say this takes a LOT of CPU and a LOT of Ram they are not effing about.

I run the mid-tier 13B model and it can easily max out my 5900X if I let it have the threads. Between that and the memory load, it pretty much eats everything it can without pushing the OS over.
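To see why a 13B model leans on RAM so hard, here's a back-of-envelope sketch of the weight footprint alone (context buffers and the OS add more on top). The 4-bit figure assumes the kind of quantization common in local setups; the exact numbers depend on the model file you download.

```python
# Rough estimate of the RAM needed just to hold a model's weights.
# Assumption: parameter count times bits-per-weight; real files add
# some overhead, so treat these as lower bounds.

def model_ram_gb(n_params: float, bits_per_weight: int) -> float:
    """Approximate weight memory in GiB."""
    return n_params * bits_per_weight / 8 / 2**30

for bits in (4, 8, 16):
    print(f"13B at {bits}-bit: ~{model_ram_gb(13e9, bits):.1f} GiB")
```

By this estimate a 13B model wants roughly 6 GiB at 4-bit and about 24 GiB at 16-bit, which is why a 30B model makes a jump to 64 GB of system RAM attractive.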

It was fun to set up, and now I know I have an integrated Linux VM built into my Windows OS.

To be fair, I'm running Windows 11 Pro. I'd suggest the same if you're following this path.

Now I'm looking at going to 64 GB of RAM by adding another 32 GB kit to the one I already have. The price isn't bad today, either.
 
What exactly can you do with this AI? I mean, is it basically like running your own ChatGPT with an interface where you can interact with it, ask it questions, etc...

I got a 128 core EPYC with 256GB of RAM I might play with over the summer while it's idling. lol

Does it only use CPU or does it have GPU capabilities also? The rig has 5x P100s in it. Alternatively, I could throw it in my 64 core EPYC setup with 6x P100s or my 48 core EPYC with 6x 3070s. I wonder which would be better suited for the task assuming it can utilize the GPUs?

edit
Seems it can't utilize the GPU. So the 256-thread EPYC is what I will install this on and play with, but I intend to try out the bigger version, aka 30B.
Seems there is an AI that can utilize the GPU, but it appears to be based on the 7B version, which according to what I read is a lot dumber than the 13B and 30B versions.
 
Yeah, this one is pretty simple by comparison. I was surprised by the flexibility of the mid-tier version. Would love to know your impressions of the max-tier version.

What I'm trying to find now is an AI you can point at a site's base URL to train on.

Because if I can build a porn chat bot trained off of literotica.com to make money with... I'll be a millionaire in months.. lol.
 
I wonder if you could connect it to the Internet and tell it to "learn everything" and it learn everything it doesn't already know from the Interwebs. Definitely not dangerous, right?
 
I wonder if you could connect it to the Internet and tell it to "learn everything" and it learn everything it doesn't already know from the Interwebs. Definitely not dangerous, right?
That is part of the whole responsible-use question. An AI as we have them today can ingest information, but it has no capability to judge whether a piece of information is correct. That's why you need the ability to see the sources of the information it used, and also a curated subset of information sources, not just everything everywhere.

The problem here is that repetition of inaccurate information will make it seem more accurate to an AI scrubber.

Would you trust an MSNBC data source over a BBC News data source? Things like that are going to be hurdles.

The best way to use an AI today is as a limited-subset data aggregator and query engine, letting it grow as you, or an appointed review board, approve the data sources it's limited to for its use case.

At least that's how I see it.
 
The best way to use an AI today is as a limited-subset data aggregator and query engine, letting it grow as you, or an appointed review board, approve the data sources it's limited to for its use case.
It takes a village to raise a baby approach eh?
 
It takes a village to raise a baby approach eh?
That's actually very true.

Imagine an AI trained on a support desk's knowledge base for all levels. Then a customer calls in and an agent takes the call. As the call is worked, the AI prompts with results and learns from the agent which results are best. Not to replace the agent, but to expedite the call, helping the agent in an automated manner.

Stuff like that is where the value-add is. For routine customer service issues it could be all AI driven: the system knows the calling number and associated metadata, speaks in a manner the metadata suggests is familiar and comfortable to the caller, and is even enabled for culture-based small talk around the caller's interests. Ultimately giving the caller a positive experience and representing the company well.

We're a surprisingly short period of time away from those abilities.
 
This makes it A LOT EASIER.


And rule 42 applies... there are clearly quite a few 'porn' local AIs you can run.

It even lets you easily use your video card for AI compute. I'm going to load it on the Meteor Lake laptop next and see if I can get it to do AI stuff with its neural processors!
 
I have zero interest in running a local AI, but I do have this question for those that do.

Is the AI output reliably reproducible? For example, if you ask it to "provide a number between one and ten." Will it give the same number every time?
 
I have zero interest in running a local AI, but I do have this question for those that do.

Is the AI output reliably reproducible? For example, if you ask it to "provide a number between one and ten." Will it give the same number every time?

Good question. I asked Google's AI (is it still called Bard?).

This isn't conclusive, but after a few attempts I felt I had an answer. I swear these are all from different queries; they were run successively and I didn't do anything to clear a local cache (although I had to click the Generate AI button each time, so I don't think it was cached on my end).

[Screenshot attachments: results of the repeated queries]
 
Here is an example using Gemini, the replacement for Bard.
[Screenshot attachment: Gemini response]


And here is one with me using Faraday.

[Screenshot attachment: Faraday response]

Keep in mind you can download various LLMs tuned to be better at specific tasks. But you cannot rely on an AI to properly calculate scientific results; it just doesn't go that deep, and it's generally more creative than exact.
 
One thing I'd love for AI to be able to do is run a tabletop RPG. It could handle the general creative stuff, but it just can't do numbers worth a ****, and as for tracking them, forget about it. It might be a handy tool to draw a character image or give a brief description of a shopkeeper and their personality, but not much beyond that.
 
@Grimlakin ... instead of "generate" rephrase to "give" so that it doesn't think you want it to use a RNG.

I appreciate both you and @Brian_B stepping in to run the experiment. The test I initially gave was meant to be an example of an open-ended question with no specific answer. It could have been "give me a letter of the Greek alphabet" or "give me a color." Complexity could be built on top with creative inspiration, provided there is no analytical, empirical, or unique answer. The point is to see if the AI provides the exact same answer every time for the exact same input. More telling, to perhaps compensate for caching, would be to have multiple people give the same inputs and compare the AI outputs.

@Brian_B's example is interesting because the AI gave the number 5. If you give a human the same input, statistically the most frequent answer would be 7.
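Whether the answer repeats comes down to how the model samples its next token. A toy sketch of that mechanism, with a completely made-up probability distribution over the numbers 1-10 (the probabilities are illustrative, not from any real model): at temperature 0 the model always takes the most likely token, so the output repeats; above 0 it samples, so repeated runs can differ.

```python
# Toy illustration of temperature sampling, the knob that controls
# whether an LLM's answer is reproducible. Distribution is invented.
import math
import random

def sample(probs: dict, temperature: float, rng: random.Random):
    """Pick a key: argmax at temperature 0, weighted sample otherwise."""
    if temperature == 0:
        return max(probs, key=probs.get)
    # Re-weight each probability by the temperature, then sample.
    weights = {k: math.exp(math.log(p) / temperature) for k, p in probs.items()}
    total = sum(weights.values())
    r = rng.random() * total
    for k, w in weights.items():
        r -= w
        if r <= 0:
            return k
    return k

probs = dict(zip(range(1, 11),
                 [0.02, 0.05, 0.08, 0.10, 0.25, 0.10, 0.20, 0.09, 0.06, 0.05]))
rng = random.Random()  # unseeded: different results each execution
print([sample(probs, 0.0, rng) for _ in range(3)])  # always the argmax, 5
print([sample(probs, 1.0, rng) for _ in range(3)])  # varies between runs
```

So a hosted AI that always returns the same number may simply be running at (or near) temperature 0, or caching; a local setup usually exposes the temperature setting directly.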
 
I just tried “what is your favorite color” - same method using Google Search

Blue
Blue
Blue
 