Artificial Intelligence confessions
I sometimes use external AI services that do not require registration. I access them through the Tor Browser, but they require JavaScript enabled and, in any case, they are SaaSS and consume enormous amounts of resources.
So these services are bad in many ways for freedom and the environment, but I sometimes use them (I only know of two services I can use through the Tor Browser without registering). In my case, even though I often have to go through lots of captchas, these services sometimes help me save a lot of time (and resources too) on some specific tasks.
I would like to know about your experiences with these services, if any, and to share links, ways to connect, opinions, etc. I am not sure, though, whether discussing SaaSS services could be going too far, even in the Troll Lounge, where off-topic conversations are allowed. If you think that is the case, please let me know; I will happily understand.
I am also considering running some AI software locally on my own server, but I do not know what resources it would require to obtain results comparable to those I get from the SaaSS services I know. I quickly tried once on my computer, but it could not cope and crashed.
You need enough RAM to store the billions of parameters of an LLM. Some models are "distilled" and require less RAM. I installed Alpaca from Flathub:
$ sudo apt install flatpak
$ flatpak remote-add --subset=floss flathub https://flathub.org/repo/flathub.flatpakrepo
$ flatpak install flathub com.jeffser.Alpaca
In Alpaca, I then added DeepSeek R1 with 14B parameters; it takes less than 13 GB of RAM.
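For anyone who prefers a terminal, the same model can presumably be pulled directly with Ollama, which is what Alpaca runs underneath; deepseek-r1:14b is the tag listed on ollama.com, so adjust it if it has changed:
$ ollama pull deepseek-r1:14b   # downloads ~9 GB of quantized weights
$ ollama run deepseek-r1:14b    # opens an interactive prompt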
I asked the LLM something to measure: "What types of recurrence relation can be solved using generating functions?"... and it literally "thought" for tens of minutes (I do not have a discrete GPU that would accelerate the computation, because those rely on non-free software)! I had never experienced that before, but the question is truly hard and the answer was OK, despite the use of a distilled model.
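To actually clock such an answer rather than guess, the shell's time builtin can wrap the Ollama command line (again assuming the CLI is reachable next to Alpaca):
$ time ollama run deepseek-r1:14b "What types of recurrence relation can be solved using generating functions?"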
13 GB of RAM huh... that is a hell of a lot.
I am glad I don't plan to use AI... that is an insane waste of resources. Although I suppose it might have some purposes. But I would say most of the uses that are ethical are just for fun.
> 13 GB of RAM huh... that is a hell of a lot.
Well, that is relative. The non-distilled DeepSeek R1 model has 671 billion parameters and you probably need one of the servers with at least 512 GB of RAM behind the link I gave to amuza. At the other extreme, the smallest distilled version of DeepSeek R1 has only 1.5 billion parameters. Looking at the list in Alpaca, I discovered a related fine-tuned version, called DeepScaleR, also with 1.5 billion parameters: https://pretty-radio-b75.notion.site/DeepScaleR-Surpassing-O1-Preview-with-a-1-5B-Model-by-Scaling-RL-19681902c1468005bed8ca303013a4e2
I am installing it now, to test how much faster, but worse, the answer to the hard question I asked yesterday will be (I did not realize at the time that it would take that long, so I did not clock it).
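For a rough sense of scale (back-of-the-envelope arithmetic, not a measurement), the weights alone take roughly the number of parameters times the bytes per weight, so at the common 4-bit quantization:
$ awk 'BEGIN { n = split("1.5 14 671", p); for (i = 1; i <= n; i++) printf "%5s billion parameters ~ %.1f GB of 4-bit weights\n", p[i], p[i] / 2 }'
  1.5 billion parameters ~ 0.8 GB of 4-bit weights
   14 billion parameters ~ 7.0 GB of 4-bit weights
  671 billion parameters ~ 335.5 GB of 4-bit weights
Actual usage is higher because of the context cache, the runtime itself and, for the full R1, weights stored at higher precision, which is roughly consistent with the figures in this thread.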
Still, I bet this AI functionality requires a fairly powerful computer with a lot of RAM.
The processor probably has to be either:
A: something like the Talos II and/or a desktop computer in general, or
B: something beyond an eighth-generation processor.
I guess my point is that neither GNU Boot nor Canoeboot machines can run such functionality.
Although there are Libreboot and coreboot devices new enough to do this, I suppose.
Even older laptops such as the T400, the T500 or the X200 can have 8 GB of RAM, right? That should be enough to use DeepScaleR since, as I wrote, even using it through Alpaca, less than 5 GB of RAM is required. The answer time will be longer though, because of the older CPU and the DDR3 RAM. And there are even smaller distilled models: Qwen3 has a 0.6-billion-parameter version and another with 1.7 billion. And, yes, their weights, like DeepSeek R1's, are released under a free software license (the Apache 2.0 license).
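If someone wants to try on such a laptop, it is worth checking the available memory first and starting from the smallest model; the qwen3 tags below are the ones listed on ollama.com (Alpaca exposes the same catalogue), so treat them as assumptions:
$ free -h                   # see how much RAM is actually available
$ ollama pull qwen3:0.6b    # ~0.5 GB of weights
$ ollama pull qwen3:1.7b    # ~1.4 GB of weights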
Oh, you said 13 GB of RAM; that's why I said that. It also sounded to me like the AI would use a lot of CPU power.
Never mind then.
Although I wonder how long it would take on a Sandy Bridge CPU vs newer stuff.
I also wonder how much heat it would create.
I mean it sounds like it uses a lot of resources.
> Although I wonder how long it would take on a Sandy Bridge CPU vs newer stuff.
Well, you can try if you have such a machine. I don't.
> I also wonder how much heat it would create.
Quite a lot: all cores are used. During winter in a cold country (unlike mine), it is not a waste though.
> I asked the LLM something to measure: "What types of recurrence relation can be solved using generating functions?"
Have you ever asked that to a SaaSS AI?
If so, how long did it take to give you a similar answer?
I do not do SaaSS. The answer certainly comes much faster because the server is probably one of these: https://en.wikipedia.org/wiki/Nvidia_DGX
With the DeepScaleR model running in Alpaca, less than 5 GB of RAM is required (including the RAM to run Alpaca itself) and the same question ("What types of recurrence relation can be solved using generating functions?") is answered in ~6 minutes. The answer is not as good as that of DeepSeek R1 with 14 billion parameters. It says that the solvable types "include both linear homogeneous and nonhomogeneous" recurrence relations, which is true because of the choice of the verb "include", but kind of deceptive: it does not mention that some non-linear recurrence relations can be solved this way, for instance to find a closed-form expression for the Catalan numbers, as DeepSeek R1 does.
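For reference, the Catalan example is the textbook case of a non-linear recurrence handled by generating functions (this derivation is from memory, not taken from either model's output):
C_0 = 1, \qquad C_{n+1} = \sum_{i=0}^{n} C_i\, C_{n-i}
C(x) = \sum_{n \ge 0} C_n x^n \;\Longrightarrow\; C(x) = 1 + x\, C(x)^2 \;\Longrightarrow\; C(x) = \frac{1 - \sqrt{1 - 4x}}{2x}
C_n = [x^n]\, C(x) = \frac{1}{n+1} \binom{2n}{n}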
I asked DeepSeek R1 the same question again, in a new conversation, to clock the time it takes: ~40 minutes. I repeat: the question is particularly hard (I had never seen such times) and everything runs on a six-year-old CPU (an i7-10510U though).
If you are interested in the two models' answers to the question (and in the preceding "thoughts", which actually appear after the answer in each file), they are attached.
| Attachment | Size |
|---|---|
| DeepScaler-answer.txt | 7.84 KB |
| Deepseek R1-14B-answer.txt | 14.19 KB |
Your i7-10510U took 40 minutes?
I wonder how long Sandy Bridge would take then.
I posed your question to DuckDuckGo's AI, the Mistral version, and uploaded the answer, which came rapidly, in maybe 2 seconds. How does it compare?
| Attachment | Size |
|---|---|
| FirstAIQuestionFromMagicBanana.txt | 2.75 KB |
It is more complete. Executing on GPUs is much faster, but I do not think that is possible with free firmware and drivers.

