2016-07-18

The catch of the question is that history has shown that there does not
exist a business case for optimizing software to be faster than human
psychology can withstand, regardless of the computational power of the
computer. If C# is mainly used for business applications, then in 2016
the most stringent limit is set by mobile phones and tablets.

First, I really don't understand what "history has shown" is supposed to mean, but there are many business cases that encourage companies to optimize their software to be faster and to use the full potential and power of computers.

Languages aren't really tied to specific hardware or operating systems, so I think that in this case you're actually referring to the .NET Framework, meaning the tooling and libraries, and not really to C#, the language.

Scientific and technical software (simulations, control, medical equipment, etc.)
tends to have higher requirements for robustness, efficiency, and absence of
flaws than business software does, but if the parties who finance the
development of C# use C# mainly for business applications, then there
is no motivation to be any more stringent with C# requirements than
the typical business application use case requires. By stringent I mean
RAM usage, application start-up time (Java apps were slow to start),
speed optimizations, the amount of thought put into stdlib API design
to keep the API as small as possible while minimizing data copying,
RAM-allocation requests, and execution time spent on executing
constructors and object initialization in general, while also allowing
succinct application code for as many application types as possible,
maximizing data locality (in RAM), etc.

You're mixing at least three things and putting them all in the same bucket, so let's clarify these things first:

There is C#, the language, which allows you to express logic and intent in source code.

There is the C# compiler, which takes the C# source code and generates IL code that the platform knows how to execute.

There is the CLR, which handles the actual execution of managed code.

Now, I didn't list these three things to educate you; I'm sure you know them fairly well. But you can't take everything, call it C#, and start throwing words around, because not everything depends on the language itself. In fact, most of the optimization is done by the JIT at the CLR level, when IL is compiled to machine code at run time. Not to mention that .NET languages are constrained by the CLR much like Java (and Scala, Groovy, Kotlin) is constrained by the JVM, unlike C++, where the language is generally constrained only by the compiler itself and nothing more!
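To make that split concrete, here is a minimal sketch. The class and method names are mine, and the IL in the comment is approximate, reconstructed from memory rather than copied from a specific compiler version:

```csharp
// C#, the language: a plain method expressing intent in source code.
public static class MathOps
{
    public static int Add(int a, int b) => a + b;
}

// The C# compiler turns that method into IL, roughly:
//
//   .method public hidebysig static int32 Add(int32 a, int32 b) cil managed
//   {
//       ldarg.0   // push a
//       ldarg.1   // push b
//       add       // add the two values on the evaluation stack
//       ret       // return the result
//   }
//
// The CLR's JIT then compiles this IL to native machine code at run time,
// which is where most of the low-level optimization actually happens.
```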

Sometimes, depending on the feature, an improvement, especially one that concerns performance and, more specifically, hardware resources and efficiency, requires changes to the CLR and/or new or updated APIs at the framework level to provide access to the feature at all, and many times this happens before the feature becomes a first-class citizen in the language, in this case C#.
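SIMD is a good illustration of that order of events: the 2016-era route to vectorized code goes through RyuJIT and the System.Numerics.Vectors package, with no new C# syntax at all. The sketch below is mine (the class and method names are made up), just to show what that framework-level API looks like from the language's point of view:

```csharp
using System.Numerics;

public static class SimdSketch
{
    // Element-wise addition of two float arrays using the hardware-sized
    // Vector<float>. The vector width comes from the CPU via RyuJIT;
    // all three arrays are assumed to have equal length.
    public static void Add(float[] a, float[] b, float[] result)
    {
        int width = Vector<float>.Count;      // e.g. 8 floats on an AVX CPU
        int i = 0;
        for (; i <= a.Length - width; i += width)
        {
            var va = new Vector<float>(a, i); // load a slice of each array
            var vb = new Vector<float>(b, i);
            (va + vb).CopyTo(result, i);      // one SIMD add per iteration
        }
        for (; i < a.Length; i++)             // scalar tail for the remainder
            result[i] = a[i] + b[i];
    }
}
```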

A counter-example is the Java VM and Java stdlib, which had a nice
feature of having a kind of GUI library in its stdlib with the
property that whenever the GUI code worked without crashing on one
operating system, it actually STARTED AND WORKED on another
operating system instead of crashing, unlike wxWidgets, Qt, and GTK
GUI apps. Unfortunately the Java VM was and is clearly a failure,
starting from the utter nonsense of how the console API of its console
applications (java, javac) was defined, combined with the slow start-up; the
initial sluggishness was OK for business apps, but could have been avoided by
more careful software design (which they actually, eventually, fixed,
but with a delay of at least multiple years).

I don't know what your point was here, but if you elaborate, maybe I can clarify it or at least share my point of view.

All in all, the pattern with Java, Windows, and business software
development in general is that technical quality has lower priority than
shipping features, and the only actual requirement for the technical quality
of business software seems to be human psychology, not what is
technically available to software developers.

Again, can you be more specific? You seem to make claims but don't provide any data to back up your arguments.

I understand that I may be just letting off steam here, getting a bit
off-topic from my original question, but honestly, as a person who
loves automation and loves tools that check for developers' flaws and
prevent me from accidentally making stupid mistakes that I myself
recognize as mistakes (not the cases that some "best practice" "guru"
considers a mistake in some book and that I intentionally consider not to
be a mistake), I still have not understood the efforts where amateurs,
who do not spend at least a few weeks studying the basics of software
development (covering the basics of algorithmic complexity, memory
allocation related issues, how to modularize one's work, basic OO, some
basics about threads and the unreliability of internet connections and
the possible issues with application state, data consistency, a little
bit of security, regular expressions, etc.), are encouraged to write
software applications. I'm all for the idea that, just as I do not have
to be a cook to make myself a sandwich and it helps me a lot in my
life to be able to make myself sandwiches, people with no IT
background should learn to write simple scripts and know some basics
about software development. Even secretaries benefit from writing their
own libraries/macros for spreadsheet applications. But the idea that
amateurs could be made to create any remotely decent
applications is just beyond me. Scripting a game, fine, but anything
application-like has just too many aspects for an amateur to consider.

What's your point here?

I mean, this seems like a rant about amateurs who are capable of writing code, probably doing it for fun, and you seem to roll your eyes because they can. Even if they aren't doing it for fun, why does this need to be your business?

That is to say, anything that is purely business oriented seems to have
very low demand for technical excellence and is therefore a bad
investment, in terms of library development, for technically more
skillful people. What is the plan for C#?

That's not something I'll contradict; it's an assumption, probably based on your own experience and observation, but it doesn't have to be like this, and I really fail to see the relation to C#, the language.

To find out what the plan is, check GitHub; maybe raise an issue and ask them about it, or look at the docs.

In the past it was to compete with Java, because the people at Sun were
stupid enough to stop Microsoft from using Microsoft Java, but the main
audience of Microsoft has always been business software users, not
scientists and engineers. The idea that scientists and engineers will
stick to Fortran and C++ for speed does not necessarily hold, because a
lot of data nowadays is in text form and the "Big Data" movement
requires text processing, which is nasty in C/C++/Fortran, to say the
least. GNU R, Scilab, etc. are clearly not the fastest possible
choices and, as demonstrated by the vast variety of scientific Python
libraries, a proper programming language is required to create more
complex data processing/analyzing routines. Java is out of the game
thanks to Oracle, but if C# does not pick up some of the 2016
scientific Python users/developers, then C# will become another COBOL,
not another Fortran. (Fortran is terribly archaic, but thanks to its
scientific users its libraries have such high quality that Fortran is
still relevant and will probably stay relevant in numeric computation.)
Even C and C++ have survived mainly because engineers,
non-business-software developers, find those languages useful.
Pascal and Delphi, as mostly business-software-oriented languages, have
practically died, with a small exception of

Where do you get these assumptions from?

It's funny that you speak about scientists and engineers while having no data or real analysis to back your arguments, but anyway: the language, be it C#, C++, Java, Python, Lua, or JavaScript, doesn't define your expertise in the software industry.

Just because you use C++ doesn't mean you're an engineer, and just because you use Python or Haskell doesn't mean you're a scientist or researcher.

To really answer your question, I invite you to have a look at the following issue in the Roslyn project on GitHub: State / Direction of C# as a High-Performance Language.

Another thing to look forward to is .NET Native; I don't know, but maybe at some point they will expand its support and allow us to use it for anything beyond UWP.
