There are a lot of negative myths about .Net which people use to favor traditional languages like C++ over .Net. I've busted the ones I read most frequently:
- The GC is really slow
malloc is way slower! The garbage collector of .Net can actually be faster than unmanaged allocation, because it knows whether a value is a reference (pointer) and can therefore move objects around in memory. The GC puts objects of about the same age (generation) close to each other in memory, and objects tend to refer to and use objects of their own generation. The processor doesn't load a single value directly from memory: it loads a whole block of a few kilobytes into its cache. When all the objects one object uses end up in the cache together, everything runs a lot faster, because working from the cache beats re-fetching different parts of memory over and over, which is what happens with unmanaged languages that simply put objects wherever there happens to be free space.

- Interpreting that stupid Intermediate Language is damned slow
.Net doesn't interpret its IL; it compiles and optimizes the IL at runtime.

- Compiling at runtime is very slow anyway
(That compiling C++ is slow doesn't mean that .Net is slow.) Compiling at runtime actually saves a lot of time, because it allows great optimizations such as removing unreachable code and inlining based on the actual runtime state. Also, operations can be compiled with processor-specific optimizations from a single IL source. Most of the resource-intensive compiling is done at application startup; some of it happens while the program is running too, but that makes the program a lot faster, not slower.

- If I write assembly myself it will be way superior to anything .Net can generate
.Net can't make every possible optimization, because analysing the code would sometimes take longer than the optimization would gain, but it usually still produces very optimized code. The big problem with writing highly optimized assembly yourself is that the most optimized code is very processor specific, very hard to port, and even worse to maintain: adding one little extra feature can force you to rewrite the whole thing. (Yes, I have indeed written programs in assembly.) Languages which partly avoid this, like C++, still require a separate build for every specific processor when fully optimizing. Also, it is nearly impossible to debug fully optimized unmanaged code, whereas .Net still gives you at least the name of the function in which the error happened, plus the offset (try to accomplish that with C++ in release mode).

- The runtime is soooo damned big, it sucks
20 MB isn't a lot. It only has to be downloaded once, and the .Net framework is in Windows Update, so everyone who updates his computer will have it installed by now. There is usually enough room on a software installation CD to include .Net; it is more than worth those 20 MB. Also, languages like C++ require certain runtimes of their own which aren't that cooperative. Does 'DLL hell' ring a bell?

- The .Net library naming SUCKS
Yeah, its naming is different from what MFC uses. But at least the naming is very consistent, which is way more important than 'nice' naming. And when I see some of the C++ API names in use, I still wonder how anyone could prefer those over the clear .Net naming.

- The .Net library itself sucks
Really? Like what? What can't it do?

- You can't use API calls like CreateFile
Now I can’t…
[DllImport("kernel32.dll", SetLastError = true)]
public static extern IntPtr CreateFile(
    string lpFileName,
    uint dwDesiredAccess,
    uint dwShareMode,
    IntPtr lpSecurityAttributes,
    uint dwCreationDisposition,
    uint dwFlagsAndAttributes,
    IntPtr hTemplateFile);
… now I can!

- .Net sucks because it is Microsoft
Yeah, so what? .Net is an ECMA standard, so you are pretty free to use it, and if there is a catch, it hasn't been exploited yet: on Linux, people are happily using Mono to run .Net stuff.
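The generation behaviour described under the first myth can actually be observed from code. A minimal sketch (the exact promotion behaviour depends on the runtime version and GC mode, so the printed numbers are typical, not guaranteed):

```csharp
using System;

class GenerationDemo
{
    static void Main()
    {
        object survivor = new object();

        // Freshly allocated objects start in the youngest generation.
        Console.WriteLine(GC.GetGeneration(survivor)); // typically 0

        // Objects that survive a collection get promoted and are
        // compacted together with the other survivors of their age.
        GC.Collect();
        Console.WriteLine(GC.GetGeneration(survivor)); // typically 1

        GC.Collect();
        Console.WriteLine(GC.GetGeneration(survivor)); // typically 2
    }
}
```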
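And the imported CreateFile can then be called like any other static method. A hedged usage sketch (Windows only; the constant values are the usual Win32 ones from the platform headers, and the file path is just an example):

```csharp
using System;
using System.Runtime.InteropServices;

class CreateFileDemo
{
    // Win32 constants, as defined in the Windows SDK headers.
    const uint GENERIC_READ = 0x80000000;
    const uint OPEN_EXISTING = 3;
    static readonly IntPtr INVALID_HANDLE_VALUE = new IntPtr(-1);

    [DllImport("kernel32.dll", SetLastError = true)]
    static extern IntPtr CreateFile(string lpFileName, uint dwDesiredAccess,
        uint dwShareMode, IntPtr lpSecurityAttributes, uint dwCreationDisposition,
        uint dwFlagsAndAttributes, IntPtr hTemplateFile);

    [DllImport("kernel32.dll")]
    static extern bool CloseHandle(IntPtr hObject);

    static void Main()
    {
        // Open an existing file read-only through the raw Win32 API.
        IntPtr handle = CreateFile(@"C:\Windows\win.ini", GENERIC_READ,
            0, IntPtr.Zero, OPEN_EXISTING, 0, IntPtr.Zero);

        if (handle == INVALID_HANDLE_VALUE)
            Console.WriteLine("CreateFile failed: " + Marshal.GetLastWin32Error());
        else
        {
            Console.WriteLine("Got a raw Win32 handle from managed code.");
            CloseHandle(handle);
        }
    }
}
```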
“The runtime is soooo damned big, it sucks”
The thing is, this whole run-time stuff has to get into memory. Why do I need like 40MB of memory for some applications when a similar application (Delphi) needs like 400k? Soooo damned big, it sucks.
Actually, it doesn't get loaded into memory all at once. A windowed .Net application (debug build) takes about 7 MB of RAM. Also, when one .Net app loads the .Net framework's modules, the other apps reuse the already loaded ones; I don't see Delphi doing that. If you only run one app, that is bad, but at the moment a lot of applications use .Net: Windows Server 2003 uses a lot of it, and even my Radeon video card control panel uses .Net.
“Compiling and optimizing” is interpretation. Both malloc and new are operations of the memory management portion of the operating system – they’ll take the same amount of time. Having another layer on top (the garbage collector) will add more time.
So, in order for me to load a hello world program, it has to load 7 to 40 megs (depending on who you talk to) of RAM in order to print “Hello World”? Umm… ok, that sucks any way you look at it, even if the memory is already in the system.
Comparing your video card’s control panel application to an application that does more than just change registry settings and call external drivers is a logical fallacy – it’s comparing apples and Ferraris.
“Compiling and optimizing” is interpreting before execution. An interpreted language processes bytecode while executing. A language using a Just-In-Time (JIT) compiler translates the bytecode before anything starts (when you load the exe) or even earlier (when installing the program), and only needs to change very little at runtime to optimize. To elaborate: a simple loop which takes 0.6 seconds in Python takes only 0.001 seconds in C#, because C# runs the loop as natively compiled assembly, while Python interprets every bytecode instruction.
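The 0.6 s vs 0.001 s numbers above are my own informal measurements; a sketch of how such a loop can be timed on the C# side (absolute numbers will vary per machine):

```csharp
using System;
using System.Diagnostics;

class LoopBenchmark
{
    static void Main()
    {
        long sum = 0;
        var watch = Stopwatch.StartNew();

        // The JIT compiles this loop to native code once;
        // an interpreter would decode the bytecode on every iteration.
        for (int i = 0; i < 10000000; i++)
            sum += i;

        watch.Stop();
        Console.WriteLine("sum=" + sum + ", elapsed=" + watch.ElapsedMilliseconds + " ms");
    }
}
```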
new requires a lot of extra overhead, because it is designed for classes. malloc just returns a pointer to a piece of allocated memory; new does a lot more, so new takes more time. Note that I am talking about the C++ new here. The new used in C# is of course NOT the same new as in C++! The new in C# uses the garbage collector, not the free memory chain. .Net actually doesn't touch malloc or the C++ new AT ALL (with a few exceptions, for example COM interop :-P).
Of course the GC has to allocate its memory via system routines, but the GC allocates big chunks at a time and uses very tweaked operations; I assure you that .Net is at least two times faster at allocation and destruction.
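To illustrate what "allocates big chunks and uses very tweaked operations" means in practice: a managed allocation is normally just a pointer bump into a pre-reserved region, not a search through a free list. A rough, hedged sketch (not a rigorous benchmark):

```csharp
using System;
using System.Diagnostics;

class AllocationDemo
{
    class Node { public int Value; public Node Next; }

    static void Main()
    {
        var watch = Stopwatch.StartNew();

        // Each 'new' here normally advances an allocation pointer in the
        // GC's young generation; the big chunk behind it was reserved earlier.
        Node head = null;
        for (int i = 0; i < 1000000; i++)
            head = new Node { Value = i, Next = head };

        watch.Stop();
        Console.WriteLine("1M nodes in " + watch.ElapsedMilliseconds + " ms");
    }
}
```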
‘Hello World’ isn't a windowed application; it would require only about 200 KB. OK, in C++ or assembly you could do better. But a windowed application in C++ actually uses a lot more memory than .Net ;), because C++ doesn't share much of its used memory with other processes. When two .Net processes load the same assembly (with the same security permissions, though), it will exist only once in memory.
I did not compare a video control panel with a program that changes registry settings; the only instance of the word ‘registry’ on this page before I responded was in your comment :p.
Did you know I just repeated myself by posting this comment? I thought that explaining my point once would be enough 😉
The only difference between “interpretation” and “JIT compiling / compiling at runtime” is when it does the work. Regardless of the terminology we prefer, whether it spends all of the time upfront or while it’s executing is semantics – the time is still spent (caveat: there are some optimizations that can be done in both cases, but there are way too many to take into account here…) Out of curiosity, Perl is an interpreted language, yet it gets compiled prior to being run, so which would you consider it – JIT or interpreted?
new, as far as I know, only requires more overhead IFF initialization of the object must be performed. No initialization, nothing to do, no time spent. It essentially comes down to an empty function call, of which I hope you won’t hold the ~half a dozen pushes and pops against me. 😉 The actual memory allocation is performed by the memory manager in the operating system, not ::new() or malloc(). That is where the bulk of the time will be spent (assuming an empty constructor… which I forgot to say before, thanks :)) BUT, what I didn’t know was the bit about the GC doing some memory management on its own. That would definitely make it faster and a bonus to using it.
The reference to your control panel app (which does, by the way, only use the registry [Look! That word again!], so my statement was accurate and applicable) was to show that comparing your quickie .Net control panel application to an application that does a lot more was, as I said before, comparing apples and Ferraris.
I wasn’t trying to berate you or prove you wrong (and I’m not now.) If it comes across that way… sorry. :p
Perl is an interpreted language. It compiles itself to a bytecode which is then interpreted, just like PHP, if I recall correctly. Indeed, .Net does translate before it executes, even if only at installation time. But we tend not to call .Net an interpreted language, because it doesn't use interpretation in the execution loop.
Perl is indeed compiled, but to its own bytecode, and there is a big difference between an interpreted bytecode and real assembly. What .Net does is compile a piece of Common Intermediate Language (CIL, a.k.a. MSIL) to real processor-specific assembly! It does even more: it compiles some code just before it is executed, so there can be tremendous optimizations. The overhead of compiling at runtime is therefore outweighed by the advantages of those optimizations, and it is totally platform independent.
I'd like to stress again that Perl doesn't compile to real assembly but to an interpreted bytecode.
Some links on the GC which also explain why the GC makes some code a lot faster than is ever possible in C++ or C (generations and the CPU cache):
http://blogs.msdn.com/joelpob/archive/2004/02/26/80776.aspx
http://msdn.microsoft.com/msdnmag/issues/1100/GCI/default.aspx
Well, a video control panel actually does a bit more than registry editing. It interfaces with the video card, which requires a lot of interop, and it has to run some complex visual algorithms, for instance when previewing or calculating possibilities. It is simply the one application I was certain used .Net. Doom 3 also ships the .Net runtime, for instance, but I had my doubts that Doom 3 actually uses .Net ;).
I am currently writing a graphics engine in C#, and it performs almost as well as my C++ counterpart, although there are certain cases where C++ really is required.
Most applications aren't big Ferraris; just look at your computer. Is your browser a Ferrari? Is your explorer a Ferrari, etc.?
I always like a discussion, so no offence taken ^^.
Better to discuss than to silently ignore :p.
I prefer C# over C++, but I have two comments:
1. The GC is often considerably slower than malloc. If somebody says the opposite, he must be paid by Microsoft or something… 😉 Fortunately, the extreme cases, like a GC version of code based on System.String being more than 100 times slower than the very same algorithm written in C++ with std::string, can usually be rewritten to reach better results.
2. The C++ stdlib has many, many more algorithms that make scientific computation faster and easier. Hopefully sometime in the future we'll see more than 5% of it in the .NET libraries. On the other hand, they're usually not needed in non-scientific applications.
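On point 1: those System.String worst cases are usually repeated concatenation, where every + allocates and copies a brand-new string. A sketch of the typical rewrite (the sizes here are made up for illustration):

```csharp
using System;
using System.Text;

class StringDemo
{
    static void Main()
    {
        int n = 10000;

        // Naive version: each '+' copies the whole string, O(n^2) work overall.
        // string s = "";
        // for (int i = 0; i < n; i++) s += "x";

        // Rewritten: StringBuilder grows an internal buffer, O(n) work overall.
        var sb = new StringBuilder(n);
        for (int i = 0; i < n; i++)
            sb.Append('x');
        string s = sb.ToString();

        Console.WriteLine(s.Length); // 10000
    }
}
```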
I don't agree. malloc may be faster than a GC at a single allocation, but remember that an application does a lot of free and malloc calls in its lifetime, which causes the heap to become fragmented; this results in bad locality of reference, wasted memory and slow allocation.
A GC's heap fragments too, but the GC can compact it and reclaim memory, which even increases locality of reference and ultimately can yield better performance. This advantage will only grow in the future as caches keep getting bigger and faster relative to main memory.
True, the C++ stdlib is nice, but there are enough third-party libraries for anything I ever wanted.
This blog has long been old and dead, but I thought I'd throw in my few cents for the late-night browsing guys.
C# rocks. It truly does. Personally I have about 10 years of C++ experience and about 2 years of C# experience. I am a professional programmer and I can’t really understand why some people have this almost ritualistic thing against adopting C#.
All that productivity that C# offers is unmatched in the C++ world. I don't want you to compare "Hello world", which no one gives a fuck about anyway, or some million-integer-iterations example. Who codes programs like that anyway? I don't. My mom doesn't. You shouldn't.
Compare instead with a live program, a game or a service. I don’t know about you but I still tend to spend 50-70% of my time fixing bugs, checking library documentations, trying to get the whole thing compiling etc. With C# and Visual Studio IDE I spend almost the full time doing what I am supposed to do, not tracking bugs or searching the dozen different documentations for my libraries as I do in C++.
Also bear in mind the budget for developing applications professionally. If you are a hobbyist with no financial ties to your project, fine. For the rest of us, this is a pretty big issue. Now, I can't really say that C# will cut your development time in half. I can't say it cuts it by 10%. How much it cuts depends on a lot of things, but what has always been true (except for the period when I was completely fresh to C#) is that C# owns C++ big time when it comes to delivering a program to your customer on time. Bugs are much easier to find and fix in C# than they ever were in C++. Refactoring is a breeze, though that is an IDE feature rather than a language feature. Still, who cares? It gets the job done. Quickly.
One of the biggest barriers people have a hard time getting past is performance. "Oh, the dreaded GC" etc. So what? It is not like it freezes up your whole application. Sure, there are some special cases where you really need that extremely fine-grained control over memory and performance (mission critical, people's lives at stake), or maybe you're developing the next cutting-edge super game engine which handles a bazillion polygons each frame. For those you can use unsafe code or import DLLs. But in 99.9% of all cases this isn't an issue to begin with, compared to the development cost of maintaining C++ code.
It is, as previously mentioned, all about choosing the right tool for the job. Is your job about making reliable, top-of-the-line applications? Is it about making a background service monitoring process information? Is it about making a blockbuster video game? Are you coding a space shuttle microprocessor? What are you doing? I would use C# for all those cases except the space shuttle microprocessor, where I feel I don't have the domain expertise to tell you how to proceed. Yes, I've made applications and yes, I've made games in C#. It works fine.
What I can't understand is why people went from assembler to C, and from C to C++, but can't go from C++ to C#. All progress is made to cut down development costs, and each step adds an additional layer of complexity for the compiler in order to cut down complexity for the programmer, so the programmer can build more complex applications. Silicon isn't expensive. Man-months are. I agree that C# isn't perfect, but it is a hell of a lot better than C++. Name me any language you think is perfect.
Why stay and linger in a 30-year-old language with tons of books and documentation about all the quirks, gotchas, problems and issues you need to be aware of, coding all your neat little classes to make it a little bit safer (a little bit more toward the work already put into C#), when everything you crave already exists? It's out there. Don't be a fool: grab it. It's not like you're selling your soul to the devil. C++ is still there for those mission-critical space shuttle PICs (if you'd even use C++ for that).
C# has evolved a lot since 2005-2006. I can't say I've ever laid eyes on a cleaner and more productive language.
The comments are still coming in at a steady (slow, but steady nevertheless) rate :).
I personally wouldn't use C++ for a space shuttle — the behaviour of the language is way more complex, and having pointers makes a proof of correctness hell. MSIL, on the other hand, is a lot more amenable. Pure functional languages even more so, if you are into them :).
I agree with you. C# is the right tool in a lot of cases. Personally, I like my language even more extreme: almost everything I write now, I write in Python. It's slow, sure. However, the volume of third-party libraries is great, the language itself is cooperative, and when I do want performance, I write a module in C. Yet I have to agree that Python will never be the lingua franca which Java is and C# will become — it is a lot easier to write ugly code in Python, whereas C# forces you to think a bit.
What do you think about Python, Ruby, D, Scala and Clean? I still think C# is a great language, though.