“The function flushes the cache not only when necessary, but also at the request of any application.
Linus Torvalds, the creator of the Linux kernel, opposed adding an L1 data cache flush on context switch to Linux kernel version 5.8. The feature was proposed as protection against Spectre-class vulnerabilities and other cache leaks.
The problem is that the feature flushes the cache not only when necessary, but also at the request of any application. In a multitasking OS such as a Linux distribution, this reduces the performance not only of the application itself, but of all the other processes as well. Naturally, this state of affairs is unsuitable for heavily loaded server systems. According to Torvalds, flushing the L1 cache makes sense only for Intel processors, and where it is not required the feature is simply superfluous.
In fact, as usual, the creator of the Linux kernel put it much more bluntly. “I don’t want some apps to think, ‘Oh, I’m so special and beautiful, and I’m such a pretty little flower, that I want to flush out the L1D every time I switch tasks, regardless of what CPU I’m running on or whether there’s a problem at all,’” Torvalds said.
A developer from Amazon is responsible for the patch that flushes L1D data on context switches.
However, neither he nor Intel representatives have commented on the proposed feature so far, nor have they specified whether it is really necessary and effective.” (src)
On Mon, Jun 1, 2020 at 10:01 AM Ingo Molnar wrote:

> - Provide an opt-in (prctl driven) mechanism to flush the L1D cache on context switch.
> The goal is to allow tasks that are paranoid due to the recent snoop assisted data
> sampling vulnerabilites, to flush their L1D on being switched out.

Am I mis-reading this? Because it looks to me like this basically exports cache flushing instructions to user space, and gives processes a way to just say "slow down anybody else I schedule with too". I don't see a way for a system admin to say "this is stupid, don't do it".

In other words, from what I can tell, this takes the crazy "Intel ships buggy CPU's and it causes problems for virtualization" code (which I didn't much care about), and turns it into "anybody can opt in to this disease, and now it affects even people and CPU's that don't need it and configurations where it's completely pointless".

To make matters worse, it has that SW flushing fallback that isn't even architectural from what I remember of the last time it was discussed, but most certainly will waste a lot of time going through the motions that may or may not flush the L1D after all.

I don't want some application to go "Oh, I'm _soo_ special and pretty and such a delicate flower, that I want to flush the L1D on every task switch, regardless of what CPU I am on, and regardless of whether there are errata or not". Because that app isn't just slowing down itself, it's slowing down others too.

I have a hard time following whether this might all end up being predicated on the STIBP static branch conditionals and might thus at least be limited only to CPU's that have the problem in the first place. But I ended up unpulling it because I can't figure that out, and the explanations in the commits don't clarify (and do imply that it's regardless of any other errata, since it's for "undiscovered future errata"). Because I don't want a random "I can make the kernel do stupid things" flag for people to opt into.
I think it needs a double opt-in.

At a _minimum_, SMT being enabled should disable this kind of crazy pseudo-security entirely, since it is completely pointless in that situation. Scheduling simply isn't a synchronization point with SMT on, so saying "sure, I'll flush the L1 at context switch" is beyond stupid.

I do not want the kernel to do things that seem to be "beyond stupid". Because I really think this is just PR and pseudo-security, and I think there's a real cost in making people think "oh, I'm so special that I should enable this".

I'm more than happy to be educated on why I'm wrong, but for now I'm unpulling it for lack of data. Maybe it never happens on SMT because of all those subtle static branch rules, but I'd really like to that to be explained.

Linus (src)
Torvalds: the C SuperHero
Linus loves programming and learned to code at a very early age.
He started with Assembler, then moved to C, and has stuck with it ever since.
He can foresee what Assembler code the C compiler will generate, and he loves micro-management.
He said that, from a computer’s perspective, writing programs the C way makes sense.
Thanks to this massive C-geek superpower, his love of programming, and of course the GNU Compiler (and the C superpowers of Mr Stallman), we have the highly resource-efficient yet secure GNU Linux operating system kernel that now runs on so many devices. In contrast to Windows, it is Open Source, it (should) respect the user’s right to privacy, and updates are free forever.
Thanks all involved.
But yes… upgrading from Debian 7 to Debian 10 might not work. Just as with Windows, a re-installation from scratch might be required.
Also, installing too many programs on GNU Linux will get the user into the same kind of trouble as Windows users.
C++ is horrible
“C++ is a horrible language. It’s made more horrible by the fact that a lot of substandard programmers use it, to the point where it’s much much easier to generate total and utter crap with it. Quite frankly, even if the choice of C were to do *nothing* but keep the C++ programmers out, that in itself would be a huge reason to use C.” (src: reddit)
On Wed, 5 Sep 2007, Dmitry Kakurin wrote:

> When I first looked at Git source code two things struck me as odd:
> 1. Pure C as opposed to C++. No idea why. Please don't talk about portability,
> it's BS.

*YOU* are full of bullshit.

C++ is a horrible language. It's made more horrible by the fact that a lot of substandard programmers use it, to the point where it's much much easier to generate total and utter crap with it. Quite frankly, even if the choice of C were to do *nothing* but keep the C++ programmers out, that in itself would be a huge reason to use C.

In other words: the choice of C is the only sane choice. I know Miles Bader jokingly said "to piss you off", but it's actually true. I've come to the conclusion that any programmer that would prefer the project to be in C++ over C is likely a programmer that I really *would* prefer to piss off, so that he doesn't come and screw up any project I'm involved with.

C++ leads to really really bad design choices. You invariably start using the "nice" library features of the language like STL and Boost and other total and utter crap, that may "help" you program, but causes:

- infinite amounts of pain when they don't work (and anybody who tells me that STL and especially Boost are stable and portable is just so full of BS that it's not even funny)

- inefficient abstracted programming models where two years down the road you notice that some abstraction wasn't very efficient, but now all your code depends on all the nice object models around it, and you cannot fix it without rewriting your app.

In other words, the only way to do good, efficient, and system-level and portable C++ ends up to limit yourself to all the things that are basically available in C. And limiting your project to C means that people don't screw that up, and also means that you get a lot of programmers that do actually understand low-level issues and don't screw things up with any idiotic "object model" crap.

So I'm sorry, but for something like git, where efficiency was a primary objective, the "advantages" of C++ is just a huge mistake. The fact that we also piss off people who cannot see that is just a big additional advantage.

If you want a VCS that is written in C++, go play with Monotone. Really. They use a "real database". They use "nice object-oriented libraries". They use "nice C++ abstractions". And quite frankly, as a result of all these design decisions that sound so appealing to some CS people, the end result is a horrible and unmaintainable mess.

But I'm sure you'd like it more than git.

Linus
“Java what a horrible language”
“Regardless of why Linus Torvalds thinks that Java is garbage, let’s make our own assessment of Java vs C and calculate the global impact.
Performance: C is 5-10 times faster than Java – see standard tests here
Did you know that Linus gives kernels not only version numbers but also names?
Kernel 5.7:

# SPDX-License-Identifier: GPL-2.0
VERSION = 5
PATCHLEVEL = 7
SUBLEVEL = 0
EXTRAVERSION =
NAME = Kleptomaniac Octopus
So what does that mean?
(Greg names his versions, too.)