Linux-libre 5.3 kernel is available now

16 replies [Last post]
andyprough
Offline
Joined: 02/12/2015

I'm compiling it tonight. Here's the source, compliments of FSF Latin America: https://www.linux-libre.fsfla.org/pub/linux-libre/releases/5.3-gnu/

jxself will probably have binaries up in his repository in the next few days.

One nice set of features for libre users in this kernel version is a batch of work on the EXT4 file system, including speed improvements, quality improvements, and code cleanup for what is probably the most commonly used file system on laptops and desktops: http://lkml.iu.edu/hypermail/linux/kernel/1907.1/02277.html

jxself
Offline
Joined: 09/13/2010

I had it finished and ready about 5 hours or so after Linus Torvalds released it.

From there, the various mirrors need to sync for it to be fully visible to everyone, so if it's not visible in your package manager yet, just wait.

andyprough
Offline
Joined: 02/12/2015

> I had it finished and ready about 5 hours or so after Linus Torvalds released it.

So you run the de-blobbing yourself then? You aren't waiting on fsfla's source tarballs?

jxself
Offline
Joined: 09/13/2010

"So you run the de-blobbing yourself then? You aren't waiting on fsfla's source tarballs?"

It is a little bit of both. I work closely with lxo at FSFLA, communicating through IRC in #linux-libre on irc.freenode.net. As soon as a new kernel version comes out we both know about it, because a bot announces it. I run the existing deblob scripts on the assumption that no deblobbing changes are needed. There usually aren't any. In this case, no deblobbing changes were needed for 5.3-rc7, 5.3-rc8, or 5.3 final.

I take advantage of the fact that deblobbing changes are usually not needed and get started as soon as the version comes out at kernel.org. I build the packages, but they don't get pushed out live. Instead, they go into a staging area where they sit and wait while lxo determines whether deblobbing changes really are needed as he works on the official FSFLA source tarballs. As long as things are good, I've saved time by deblobbing and compiling immediately rather than waiting, and the version gets out to people sooner. (Sooner meaning: my binaries come out at the same time as FSFLA's source tarballs, when lxo pushes them out to the public.)

In those rare cases where deblobbing changes are needed, I delete what I built and re-do it once lxo releases the updated deblobbing scripts. But that's rarely necessary, so I usually come out ahead by not waiting. Still, I say it's a little bit of both, since I do wait for lxo's approval before anything goes out.

It's all part of the dedication to keeping the repository fresh (hence the name "freesh"), serving up freshly-baked kernel goodness quickly.
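
(For anyone curious, roughly what that deblobbing step looks like in practice; the script names and their location are assumed from the FSFLA release directory linked above, so check there for the exact files for your kernel version:)

cd linux-5.3
# fetch the deblob driver script and its helper from the 5.3-gnu release directory (assumed location)
wget https://www.linux-libre.fsfla.org/pub/linux-libre/releases/5.3-gnu/deblob-5.3
wget https://www.linux-libre.fsfla.org/pub/linux-libre/releases/5.3-gnu/deblob-check
chmod +x deblob-5.3 deblob-check
# run the deblobbing over the unpacked vanilla source tree
./deblob-5.3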

andyprough
Offline
Joined: 02/12/2015

That's impressive teamwork! How long does it take you to run the de-blobbing scripts? I assume you have a much more powerful system than mine. I'm using an old i3 laptop; it takes me about 4 hours to compile Linux-libre. I usually run the make process overnight while I sleep.

jxself
Offline
Joined: 09/13/2010

The deblobbing and compiling are separate things. I haven't precisely timed the deblobbing. 10 minutes? 15? Probably closer to 15.

4 hours seems like a long time. Is that just for 1 single kernel package? Even on the older machine with an Intel Core 2 Quad it didn't take that long for 1 kernel package.

My kernel compiling is done on a custom-built machine dedicated to this purpose. It has an Asus KGPE-D16 motherboard, with libreboot of course, and 2 x 16-core CPUs, so 32 CPU cores in total. I've put in a lot of RAM and set up a RAM disk to hold the kernel source, so the compiling process is faster because nothing leaves RAM for slower disks. It takes under 15 minutes to make one kernel package.
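
(For anyone wanting to copy the RAM disk trick, a minimal sketch; the mount point, tmpfs size, tarball name, and directory are placeholders to adjust for your own machine. On the 4-hour i3 build, passing -j to match your core count is usually the single biggest speedup.)

sudo mkdir -p /mnt/ramdisk
sudo mount -t tmpfs -o size=8G tmpfs /mnt/ramdisk
# unpack and build entirely inside the RAM disk so nothing touches slower disks
tar xf linux-libre-5.3-gnu.tar.xz -C /mnt/ramdisk
cd /mnt/ramdisk/linux-5.3
cp /boot/config-$(uname -r) .config   # start from your current config (or use: make defconfig)
make olddefconfig
make -j$(nproc) bindeb-pkg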

But that's just for one CPU architecture. I currently build kernels for 6 configurations across 5 architectures: amd64, arm64, armhf, i386 (twice), and ppc64el. i386 counts twice because I build two kernels for it: one for newer 32-bit processors that support PAE and another for older ones that do not (the nonpae kernel).
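
(The pae/nonpae split presumably comes down to the i386 kernel config, i.e. something like CONFIG_HIGHMEM64G=y, which selects CONFIG_X86_PAE, in the PAE flavor and left off in the nonpae one; that's an assumption about the configs, not something stated here.)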

Multiply that out by all of the supported kernel versions, since each kernel version is compiled for every configuration, and you have the grand total. There are currently 4 supported versions (5.3, 5.2, 4.19, and 4.14, though 5.2 should become EOL in a few weeks). Normally there are 3 (the latest kernel version plus the two LTS ones), but during the transition around a major new release it goes up to 4 for a few weeks, until the prior major release becomes EOL and it drops back to 3.
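
(To spell out the arithmetic: 6 kernel configurations × 4 supported versions = 24 packages per cycle during a transition, and 6 × 3 = 18 once the old release goes EOL, assuming every configuration really is built for every supported version as described above.)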

andyprough
Offline
Joined: 02/12/2015

How do you compile for the different architectures then, since I assume you are doing all the work on the one machine? Do you need to set up different VMs or different chroot environments, or can you compile for a separate architecture simply by passing different parameters to make? I've always wondered about that; it seems very mystical to be able to build for ARM entirely on an x86 machine.

So you are running a dual Opteron setup then. Totally libre, I'm assuming? That's fantastic. How much RAM, may I ask? That sounds like a killer system.

jxself
Offline
Joined: 09/13/2010

You compile for different architectures by using a cross compiler. It runs on one machine (like amd64) but builds binaries for a different system. Trisquel already packages some; installing crossbuild-essential-arm64, for example, gives you a toolchain that targets 64-bit ARM.

Then you set two environment variables, like:

export ARCH=arm64

This is the kernel's name for the target architecture, taken from the directory names at https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/arch

The other is

export CROSS_COMPILE=aarch64-linux-gnu-

This is based on the name of the cross-compiler; the hyphen on the end is important because this value is added to the front of the tool names, so that instead of just running "gcc" to compile, the build runs "aarch64-linux-gnu-gcc". The name comes from the cross-compiler package; see https://packages.ubuntu.com/disco/amd64/gcc-aarch64-linux-gnu/filelist for an example.

And so with those two things set you're telling the kernel's build system to use that compiler and build a kernel for that architecture.

The rest of the process is identical to compiling the kernel normally: you still need to make an appropriate kernel config, and then sit back and wait while it compiles.
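
(Putting the pieces together, a minimal end-to-end sketch for an arm64 cross build on Trisquel; the defconfig step is just the stock example configuration, so substitute your own config and source directory as appropriate:)

sudo apt install crossbuild-essential-arm64
export ARCH=arm64
export CROSS_COMPILE=aarch64-linux-gnu-
cd linux-5.3
make defconfig              # or provide your own .config and run: make olddefconfig
make -j$(nproc)             # compiles an arm64 kernel on an x86 host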

andyprough
Offline
Joined: 02/12/2015

That's remarkable. I'd like to buy an ARM SoC just so I can compile a kernel on my i3 laptop now and see if I can get it to run on the ARM system.

One last question if you have time: do you use make bindeb-pkg as your make target to build the .deb files for your repository, or a different method for creating the deb files? I'm interested in the best practices for this part.

jxself
Offline
Joined: 09/13/2010

Yes. An example would be this. Adjust the -j value since you probably don't have 32 CPU cores, and change the version number as needed.

make -j 32 bindeb-pkg KDEB_PKGVERSION=5.2.15-gnu-1.0
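
(For what it's worth, that should leave the resulting .deb files, linux-image-*, linux-headers-* and friends, one directory above the kernel source tree, ready to install with dpkg -i or to import into a repository.)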

andyprough
Offline
Joined: 02/12/2015

Right, that's about what I was hoping for; very simple. And you can make the deb that way with any cross compiler?

jxself
Offline
Joined: 09/13/2010

Sure, but using make is just part of the kernel's normal build process, so it isn't really related to cross-compilers per se. You'd do that even if you were not cross-compiling, so there's nothing special there in that regard.
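
(Concretely, combining the earlier pieces, a cross-built arm64 deb would be produced with something like the following; the -j count and version string are just examples:)

make -j$(nproc) ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- bindeb-pkg KDEB_PKGVERSION=5.3-gnu-1.0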

andyprough
Offline
Joined: 02/12/2015

Makes sense. On that KGPE-D16, are you limited to 192GB of RAM with libreboot, like it says on the coreboot page? Looks like if you filled all 16 slots with maximum-size 16GB DIMMs you'd be up to 256GB, but the coreboot website says nothing above 192GB will run.

jxself
Offline
Joined: 09/13/2010

Yes, I have also read that. I have not tried that much. Mine has 16GB which is a far cry from any limit. :)

andyprough
Offline
Joined: 02/12/2015

Good to know. Prices on Newegg for 16-core Opterons are only about $15 each, but 16 sticks of RAM would cost me over $500.

ao
ao
Offline
Joined: 07/20/2017

How do you safely update your Trisquel 4.4.0 kernel?

Magic Banana

I am a member!

I am a translator!

Offline
Joined: 07/24/2010

Follow the instructions on https://jxself.org/linux-libre/