Why do game developers prefer Windows?

  • Is it that DirectX is easier or better than OpenGL, even if OpenGL is cross-platform? Why don't we see really powerful games for Linux like there are for Windows?

    There was recently an article on this topic featuring John Carmack.

    This blog article discusses why they (a company) use OpenGL instead of DirectX; it's a must-read.

    False premise - where is the evidence that game developers prefer Windows? I think @user17674 had it more accurately - they want to develop for platforms so they can sell more and make more money :-)

    Virus developers also prefer Windows. It's all about the user base.

    Why Windows programmers prefer DirectX over OpenGL is probably a more interesting question historically. As the link to the Carmack interview above might indicate, DX used to be loathed. It would be interesting to know how it kept enough users for MS to support it until Xbox and modern rewrites made it more popular. Or maybe it was just always good enough, so folks stuck with it.

    Why do people rob banks? *Because that's where the money is*.

    @CesarGon: It is not only because of the user base but also for the ease of development.

    @Giorgio: Granted; great development tools and reasonable APIs are important too.

    As I understand it, there are two reasons: (1) Most game devs are primarily targeting Windows, and so rationalize there is no reason to use a cross-platform API. (2) OpenGL may be faster and more powerful, but I recall hearing that the DirectX API is a lot cleaner and easier to use (no link). I, personally, prefer OpenGL.

    @InkBlend the question is why you would limit yourself to Windows if there is a faster, more powerful, cross-platform API in OpenGL. Wouldn't it make more sense to use it and support all platforms?

    That OpenGL may be faster is in dispute - some recent benchmarks (see e.g. http://www.g-truc.net/post-0547.html) show that Valve's results may be an isolated case and the opposite may in fact be the case. I'd also consider that Wolfire blog post to be discredited as it contains many factual inaccuracies (I'd even go so far as to call them outright untruths as the author should have been aware of these things) some of which were subsequently retracted.

  • Nicol Bolas (correct answer, 9 years ago)

    Many of the answers here are really, really good. But the OpenGL and Direct3D (D3D) issue should probably be addressed. And that requires... a history lesson.

    And before we begin, I know far more about OpenGL than I do about Direct3D. I've never written a line of D3D code in my life, and I've written tutorials on OpenGL. So what I'm about to say isn't a question of bias. It is simply a matter of history.

    Birth of Conflict

    One day, sometime in the early 90's, Microsoft looked around. They saw the SNES and Sega Genesis being awesome, running lots of action games and such. And they saw DOS. Developers coded DOS games like console games: direct to the metal. Unlike consoles however, where a developer who made an SNES game knew what hardware the user would have, DOS developers had to write for multiple possible configurations. And this is rather harder than it sounds.

    And Microsoft had a bigger problem: Windows. See, Windows wanted to own the hardware, unlike DOS which pretty much let developers do whatever. Owning the hardware is necessary in order to have cooperation between applications. Cooperation is exactly what game developers hate because it takes up precious hardware resources they could be using to be awesome.

    In order to promote game development on Windows, Microsoft needed a uniform API that was low-level, ran on Windows without being slowed down by it, and most of all cross-hardware. A single API for all graphics, sound, and input hardware.

    Thus, DirectX was born.

    3D accelerators were born a few months later. And Microsoft ran into a spot of trouble. See, DirectDraw, the graphics component of DirectX, only dealt with 2D graphics: allocating graphics memory and doing bit-blits between different allocated sections of memory.

    So Microsoft purchased a bit of middleware and fashioned it into Direct3D Version 3. It was universally reviled. And with good reason; looking at D3D v3 code is like staring into the Ark of the Covenant.

    Old John Carmack at Id Software took one look at that trash and said, "Screw that!" and decided to write towards another API: OpenGL.

    See, another part of the many-headed beast that is Microsoft had been busy working with SGI on an OpenGL implementation for Windows. The idea here was to court developers of typical GL applications: workstation apps. CAD tools, modelling, that sort of thing. Games were the farthest thing from their minds. This was primarily a Windows NT thing, but Microsoft decided to add it to Win95 too.

    As a way to entice workstation developers to Windows, Microsoft decided to try to bribe them with access to these newfangled 3D graphics cards. Microsoft implemented the Installable Client Driver protocol: a graphics card maker could override Microsoft's software OpenGL implementation with a hardware-based one. Code could automatically just use a hardware OpenGL implementation if one was available.
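    To make the mechanics concrete, here is a minimal sketch (in C; the function name and error-free flow are illustrative, but the PFD_GENERIC_* flags are the actual Win32 mechanism) of how a program could tell whether its pixel format came from Microsoft's generic software renderer or from a vendor's ICD:

    ```c
    #include <windows.h>

    /* Sketch: is this pixel format backed by a hardware ICD?
       Assumes hdc is a valid device context for an OpenGL-capable window. */
    int is_hardware_opengl(HDC hdc)
    {
        PIXELFORMATDESCRIPTOR pfd = {0};
        pfd.nSize      = sizeof(pfd);
        pfd.nVersion   = 1;
        pfd.dwFlags    = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER;
        pfd.iPixelType = PFD_TYPE_RGBA;
        pfd.cColorBits = 32;

        int fmt = ChoosePixelFormat(hdc, &pfd);
        DescribePixelFormat(hdc, fmt, sizeof(pfd), &pfd);

        /* PFD_GENERIC_FORMAT alone: Microsoft's software implementation.
           PFD_GENERIC_FORMAT | PFD_GENERIC_ACCELERATED: a mini client driver.
           Neither flag: a full vendor ICD. */
        if (pfd.dwFlags & PFD_GENERIC_FORMAT)
            return (pfd.dwFlags & PFD_GENERIC_ACCELERATED) != 0;
        return 1;
    }
    ```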

    In those early days, though, consumer-level video cards did not support OpenGL. That didn't stop Carmack from just porting Quake to OpenGL (GLQuake) on his SGI workstation. As we can read from the GLQuake readme:

    Theoretically, glquake will run on any compliant OpenGL that supports the texture objects extensions, but unless it is very powerfull hardware that accelerates everything needed, the game play will not be acceptable. If it has to go through any software emulation paths, the performance will likely by well under one frame per second.

    At this time (march ’97), the only standard opengl hardware that can play glquake reasonably is an intergraph realizm, which is a VERY expensive card. 3dlabs has been improving their performance significantly, but with the available drivers it still isn’t good enough to play. Some of the current 3dlabs drivers for glint and permedia boards can also crash NT when exiting from a full screen run, so I don’t recommend running glquake on 3dlabs hardware.

    3dfx has provided an opengl32.dll that implements everything glquake needs, but it is not a full opengl implementation. Other opengl applications are very unlikely to work with it, so consider it basically a “glquake driver”.

    This was the birth of the miniGL drivers. These evolved into full OpenGL implementations eventually, as hardware became powerful enough to implement most OpenGL functionality in hardware. nVidia was the first to offer a full OpenGL implementation. Many other vendors struggled, which is one reason why developers preferred Direct3D: they were compatible on a wider range of hardware. Eventually only nVidia and ATI (now AMD) remained, and both had a good OpenGL implementation.

    OpenGL Ascendant

    Thus the stage is set: Direct3D vs. OpenGL. It's really an amazing story, considering how bad D3D v3 was.

    The OpenGL Architectural Review Board (ARB) is the organization responsible for maintaining OpenGL. They issue a number of extensions, maintain the extension repository, and create new versions of the API. The ARB is a committee made up of many of the graphics industry players, as well as some OS makers. Apple and Microsoft have at various times been members.

    3Dfx comes out with the Voodoo2. This is the first hardware that can do multitexturing, which is something that OpenGL couldn't do before. While 3Dfx was strongly against OpenGL, NVIDIA, makers of the next multitexturing graphics chip (the TNT1), loved it. So the ARB issued an extension: GL_ARB_multitexture, which would allow access to multitexturing.
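    To give a flavor of what the extension exposed, here is a hedged sketch of single-pass lightmapping with it (the entry points and enums are the real GL_ARB_multitexture ones; the function-pointer loading and the helper itself are illustrative):

    ```c
    #include <GL/gl.h>
    #include <GL/glext.h>  /* ARB enums and function-pointer typedefs */

    /* Assumed to have been fetched at startup, e.g. via wglGetProcAddress. */
    PFNGLACTIVETEXTUREARBPROC   glActiveTextureARB;
    PFNGLMULTITEXCOORD2FARBPROC glMultiTexCoord2fARB;

    /* Draw one quad sampling a base texture and a lightmap in a single
       pass, instead of drawing two blended passes. */
    void draw_lightmapped_quad(GLuint base, GLuint lightmap)
    {
        glActiveTextureARB(GL_TEXTURE0_ARB);   /* unit 0: base texture */
        glEnable(GL_TEXTURE_2D);
        glBindTexture(GL_TEXTURE_2D, base);

        glActiveTextureARB(GL_TEXTURE1_ARB);   /* unit 1: lightmap */
        glEnable(GL_TEXTURE_2D);
        glBindTexture(GL_TEXTURE_2D, lightmap);

        glBegin(GL_QUADS);
        glMultiTexCoord2fARB(GL_TEXTURE0_ARB, 0.0f, 0.0f);
        glMultiTexCoord2fARB(GL_TEXTURE1_ARB, 0.0f, 0.0f);
        glVertex2f(-1.0f, -1.0f);

        glMultiTexCoord2fARB(GL_TEXTURE0_ARB, 1.0f, 0.0f);
        glMultiTexCoord2fARB(GL_TEXTURE1_ARB, 1.0f, 0.0f);
        glVertex2f( 1.0f, -1.0f);

        glMultiTexCoord2fARB(GL_TEXTURE0_ARB, 1.0f, 1.0f);
        glMultiTexCoord2fARB(GL_TEXTURE1_ARB, 1.0f, 1.0f);
        glVertex2f( 1.0f,  1.0f);

        glMultiTexCoord2fARB(GL_TEXTURE0_ARB, 0.0f, 1.0f);
        glMultiTexCoord2fARB(GL_TEXTURE1_ARB, 0.0f, 1.0f);
        glVertex2f(-1.0f,  1.0f);
        glEnd();
    }
    ```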

    Meanwhile, Direct3D v5 comes out. Now, D3D has become an actual API, rather than something a cat might vomit up. The problem? No multitexturing.

    Oops.

    Now, that one wouldn't hurt nearly as much as it should have, because people didn't use multitexturing much. Not directly. Multitexturing hurt performance quite a bit, and in many cases it wasn't worth it compared to multi-passing. And of course, game developers love to ensure that their games work on older hardware, which didn't have multitexturing, so many games shipped without it.

    D3D was thus given a reprieve.

    Time passes and NVIDIA deploys the GeForce 256 (not GeForce GT-250; the very first GeForce), pretty much ending competition in graphics cards for the next two years. The main selling point is the ability to do vertex transform and lighting (T&L) in hardware. Not only that, NVIDIA loved OpenGL so much that their T&L engine effectively was OpenGL. Almost literally; as I understand it, some of their registers actually took OpenGL enumerators directly as values.

    Direct3D v6 comes out. Multitexture at last, but... no hardware T&L. OpenGL had always had a T&L pipeline, even though before the 256 it was implemented in software. So it was very easy for NVIDIA to just convert their software implementation to a hardware solution. It wasn't until D3D v7 that D3D finally got hardware T&L support.

    Dawn of Shaders, Twilight of OpenGL

    Then, GeForce 3 came out. And a lot of things happened at the same time.

    Microsoft had decided that they weren't going to be late again. So instead of looking at what NVIDIA was doing and then copying it after the fact, they took the astonishing position of going to them and talking to them. And then they fell in love and had a little console together.

    A messy divorce ensued later. But that's for another time.

    What this meant for the PC was that GeForce 3 came out simultaneously with D3D v8. And it's not hard to see how GeForce 3 influenced D3D 8's shaders. The pixel shaders of Shader Model 1.0 were extremely specific to NVIDIA's hardware. There was no attempt made whatsoever at abstracting NVIDIA's hardware; SM 1.0 was just whatever the GeForce 3 did.

    When ATI started to jump into the performance graphics card race with the Radeon 8500, there was a problem. The 8500's pixel processing pipeline was more powerful than NVIDIA's stuff. So Microsoft issued Shader Model 1.1, which basically was "Whatever the 8500 does."

    That may sound like a failure on D3D's part. But failure and success are matters of degrees. And epic failure was happening in OpenGL-land.

    NVIDIA loved OpenGL, so when GeForce 3 hit, they released a slew of OpenGL extensions. Proprietary OpenGL extensions: NVIDIA-only. Naturally, when the 8500 showed up, it couldn't use any of them.

    See, at least in D3D 8 land, you could run your SM 1.0 shaders on ATI hardware. Sure, you had to write new shaders to take advantage of the 8500's coolness, but at least your code worked.

    In order to have shaders of any kind on Radeon 8500 in OpenGL, ATI had to write a number of OpenGL extensions. Proprietary OpenGL extensions: ATI-only. So you needed an NVIDIA codepath and an ATI codepath, just to have shaders at all.

    Now, you might ask, "Where was the OpenGL ARB, whose job it was to keep OpenGL current?" Where many committees often end up: off being stupid.

    See, I mentioned ARB_multitexture above because it factors deeply into all of this. The ARB seemed (from an outsider's perspective) to want to avoid the idea of shaders altogether. They figured that if they slapped enough configurability onto the fixed-function pipeline, they could equal the ability of a shader pipeline.

    So the ARB released extension after extension. Every extension with the words "texture_env" in it was yet another attempt to patch this aging design. Check the registry: between ARB and EXT extensions, there were eight of these extensions made. Many were promoted to OpenGL core versions.
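    For a taste of what those extensions looked like in practice, here is a hedged sketch of one combiner stage set up through ARB_texture_env_combine (the enums are the extension's own; the helper function is illustrative):

    ```c
    #include <GL/gl.h>
    #include <GL/glext.h>

    /* Configure the active fixed-function texture stage to compute
       RGB = texture color * previous stage's color. */
    void setup_modulate_stage(void)
    {
        glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE_ARB);
        glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_RGB_ARB,  GL_MODULATE);
        glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE0_RGB_ARB,  GL_TEXTURE);
        glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND0_RGB_ARB, GL_SRC_COLOR);
        glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE1_RGB_ARB,  GL_PREVIOUS_ARB);
        glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND1_RGB_ARB, GL_SRC_COLOR);
    }
    ```

    Every knob is another glTexEnvi call; you can see why piling more of these on was never going to keep pace with an actual programmable pipeline.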

    Microsoft was a part of the ARB at this time; they left around the time D3D 9 hit. So it is entirely possible that they were working to sabotage OpenGL in some way. I personally doubt this theory for two reasons. One, they would have had to get help from other ARB members to do that, since each member only gets one vote. And most importantly two, the ARB didn't need Microsoft's help to screw things up. We'll see further evidence of that.

    Eventually the ARB, likely under threat from both ATI and NVIDIA (both active members), pulled their head out long enough to provide actual assembly-style shaders.

    Want something even stupider?

    Hardware T&L. Something OpenGL had first. Well, it's interesting. To get the maximum possible performance from hardware T&L, you need to store your vertex data on the GPU. After all, it's the GPU that actually wants to use your vertex data.

    In D3D v7, Microsoft introduced the concept of Vertex Buffers. These are allocated swaths of GPU memory for storing vertex data.

    Want to know when OpenGL got their equivalent of this? Oh, NVIDIA, being a lover of all things OpenGL (so long as they are proprietary NVIDIA extensions), released the vertex array range extension when the GeForce 256 first hit. But when did the ARB decide to provide similar functionality?

    Two years later. This was after they approved vertex and fragment shaders (pixel in D3D language). That's how long it took the ARB to develop a cross-platform solution for storing vertex data in GPU memory. Again, something that hardware T&L needs to achieve maximum performance.
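    That eventual solution was ARB_vertex_buffer_object, which became core in OpenGL 1.5. A minimal sketch of uploading vertex data to GPU memory with the core entry points (assuming a GL 1.5+ header or extension loader; the helper name is illustrative):

    ```c
    #include <GL/gl.h>

    /* Create a buffer object and hand the vertex data to the driver.
       GL_STATIC_DRAW hints "written once, drawn many times", letting
       the driver keep the data in video memory. */
    GLuint upload_vertices(const float *verts, GLsizeiptr size_in_bytes)
    {
        GLuint vbo;
        glGenBuffers(1, &vbo);
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glBufferData(GL_ARRAY_BUFFER, size_in_bytes, verts, GL_STATIC_DRAW);
        return vbo;
    }
    ```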

    One Language to Ruin Them All

    So, the OpenGL development environment was fractured for a time. No cross-hardware shaders, no cross-hardware GPU vertex storage, while D3D users enjoyed both. Could it get worse?

    You... you could say that. Enter 3D Labs.

    Who are they, you might ask? They are a defunct company whom I consider to be the true killers of OpenGL. Sure, the ARB's general ineptness made OpenGL vulnerable when it should have been owning D3D. But 3D Labs is perhaps the single biggest reason to my mind for OpenGL's current market state. What could they have possibly done to cause that?

    They designed the OpenGL Shading Language.

    See, 3D Labs was a dying company. Their expensive GPUs were being marginalized by NVIDIA's increasing pressure on the workstation market. And unlike NVIDIA, 3D Labs did not have any presence in the mainstream market; if NVIDIA won, they died.

    Which they did.

    So, in a bid to remain relevant in a world that didn't want their products, 3D Labs showed up to a Game Developer Conference wielding presentations for something they called "OpenGL 2.0". This would be a complete, from-scratch rewrite of the OpenGL API. And that makes sense; there was a lot of cruft in OpenGL's API at the time (note: that cruft still exists). Just look at how texture loading and binding work; it's semi-arcane.

    Part of their proposal was a shading language. Naturally. However, unlike the current cross-platform ARB extensions, their shading language was "high-level" (C is high-level for a shading language. Yes, really).

    Now, Microsoft was working on their own high-level shading language. Which they, in all of Microsoft's collective imagination, called... the High Level Shading Language (HLSL). But there was a fundamental difference in approach between the two languages.

    The biggest issue with 3D Labs's shader language was that it was built-in. See, HLSL was a language Microsoft defined. They released a compiler for it, and it generated Shader Model 2.0 (or later shader models) assembly code, which you would feed into D3D. In the D3D v9 days, HLSL was never touched by D3D directly. It was a nice abstraction, but it was purely optional. And a developer always had the opportunity to go behind the compiler and tweak the output for maximum performance.

    The 3D Labs language had none of that. You gave the driver the C-like language, and it produced a shader. End of story. Not an assembly shader, not something you feed into something else. The actual OpenGL object representing a shader.

    What this meant is that OpenGL users were open to the vagaries of driver developers who were just getting the hang of compiling assembly-like languages. Compiler bugs ran rampant in the newly christened OpenGL Shading Language (GLSL). What's worse, if you managed to get a shader to compile on multiple platforms correctly (no mean feat), you were still subjected to the optimizers of the day. Which were not as optimal as they could be.

    While that was the biggest flaw in GLSL, it wasn't the only flaw. By far.

    In D3D, and in the older assembly languages in OpenGL, you could mix and match vertex and fragment (pixel) shaders. So long as they communicated with the same interface, you could use any vertex shader with any compatible fragment shader. And there were even levels of incompatibility they could accept; a vertex shader could write an output that the fragment shader didn't read. And so forth.

    GLSL didn't have any of that. Vertex and fragment shaders were fused together into what 3D Labs called a "program object". So if you wanted to share vertex and fragment programs, you had to build multiple program objects. And this caused the second biggest problem.

    See, 3D Labs thought they were being clever. They based GLSL's compilation model on C/C++. You take a .c or .cpp and compile it into an object file. Then you take one or more object files and link them into a program. So that's how GLSL compiles: you compile your shader (vertex or fragment) into a shader object. Then you put those shader objects in a program object, and link them together to form your actual program.
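    In code, that model looks roughly like this (a minimal sketch using the GL 2.0 entry points, error checking omitted, helper name illustrative):

    ```c
    #include <GL/gl.h>

    /* GLSL's C-like model: compile each shader into a shader object,
       then attach the objects to a program object and link. Only the
       linked program is usable for rendering. */
    GLuint build_program(const char *vs_src, const char *fs_src)
    {
        GLuint vs = glCreateShader(GL_VERTEX_SHADER);
        glShaderSource(vs, 1, &vs_src, NULL);
        glCompileShader(vs);              /* the "compile" step */

        GLuint fs = glCreateShader(GL_FRAGMENT_SHADER);
        glShaderSource(fs, 1, &fs_src, NULL);
        glCompileShader(fs);

        GLuint prog = glCreateProgram();
        glAttachShader(prog, vs);
        glAttachShader(prog, fs);
        glLinkProgram(prog);              /* the "link" step; in practice,
                                             drivers often recompiled here */
        return prog;
    }
    ```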

    While this did allow potential cool ideas like having "library" shaders that contained extra code that the main shaders could call, what it meant in practice was that shaders were compiled twice. Once in the compilation stage and once in the linking stage. NVIDIA's compiler in particular was known for basically running the compile twice. It didn't generate some kind of object code intermediary; it just compiled it once and threw away the answer, then compiled it again at link time.

    So even if you want to link your vertex shader to two different fragment shaders, you have to do a lot more compiling than in D3D. Especially since in D3D, the compiling of the C-like language was all done offline, not at the beginning of the program's execution.

    There were other issues with GLSL. Perhaps it seems wrong to lay the blame on 3D Labs, since the ARB did eventually approve and incorporate the language (but nothing else of their "OpenGL 2.0" initiative). But it was their idea.

    And here's the really sad part: 3D Labs was right (mostly). GLSL is not a vector-based shading language the way HLSL was at the time. This was because 3D Labs's hardware was scalar hardware (similar to modern NVIDIA hardware), but they were ultimately right in the direction many hardware makers went with their hardware.

    They were right to go with a compile-online model for a "high-level" language. D3D even switched to that eventually.

    The problem was that 3D Labs were right at the wrong time. And in trying to summon the future too early, in trying to be future-proof, they cast aside the present. It sounds similar to how OpenGL always had the possibility for T&L functionality. Except that OpenGL's T&L pipeline was still useful before hardware T&L, while GLSL was a liability before the world caught up to it.

    GLSL is a good language now. But for the time? It was horrible. And OpenGL suffered for it.

    Falling Towards Apotheosis

    While I maintain that 3D Labs struck the fatal blow, it was the ARB itself who would drive the last nail in the coffin.

    This is a story you may have heard of. By the time of OpenGL 2.1, OpenGL was running into a problem. It had a lot of legacy cruft. The API wasn't easy to use anymore. There were 5 ways to do things, and no idea which was the fastest. You could "learn" OpenGL with simple tutorials, but you didn't really learn the OpenGL API that gave you real performance and graphical power.

    So the ARB decided to attempt another re-invention of OpenGL. This was similar to 3D Labs's "OpenGL 2.0", but better because the ARB was behind it. They called it "Longs Peak."

    What is so bad about taking some time to improve the API? This was bad because Microsoft had left themselves vulnerable. See, this was at the time of the Vista switchover.

    With Vista, Microsoft decided to institute some much-needed changes in display drivers. They forced drivers to submit to the OS for graphics memory virtualization and various other things.

    While one can debate the merits of this or whether it was actually possible, the fact remains this: Microsoft deemed D3D 10 to be Vista (and above) only. Even if you had hardware that was capable of D3D 10, you couldn't run D3D 10 applications without also running Vista.

    You might also remember that Vista... um, let's just say that it didn't work out well. So you had an underperforming OS, a new API that only ran on that OS, and a fresh generation of hardware that needed that API and OS to do anything more than be faster than the previous generation.

    However, developers could access D3D 10-class features via OpenGL. Well, they could if the ARB hadn't been busy working on Longs Peak.

    Basically, the ARB spent a good year and a half to two years' worth of work to make the API better. By the time OpenGL 3.0 actually came out, Vista adoption was up, Win7 was around the corner to put Vista behind them, and most game developers didn't care about D3D 10-class features anyway. After all, D3D 10 hardware ran D3D 9 applications just fine. And with the rise of PC-to-console ports (or PC developers jumping ship to console development, take your pick), developers didn't need D3D 10-class features.

    Now, if developers had access to those features earlier via OpenGL on WinXP machines, then OpenGL development might have received a much-needed shot in the arm. But the ARB missed their opportunity. And do you want to know the worst part?

    Despite spending two precious years attempting to rebuild the API from scratch... they still failed and just reverted back to the status quo (except for a deprecation mechanism).

    So not only did the ARB miss a crucial window of opportunity, they didn't even get done the task that made them miss that chance. Pretty much epic fail all around.

    And that's the tale of OpenGL vs. Direct3D. A tale of missed opportunities, gross stupidity, willful blindness, and simple foolishness.

    Did you have this written up somewhere, or did you write it off the top of your head?

    @Kristofer: I don't know if you can call it "off the top of your head" for something that took an hour or so to compose, but I didn't have it written up somewhere.

    This might be for the wrong reasons, but OpenGL is the single reason that makes Counter-Strike 1.6 still so attractive to professional FPS gamers. DirectX has terrible pixel-perfect aiming (at least all the engines built on it do), and it killed the e-sport possibilities of new FPSs that lack the (now old) OpenGL. *Disclaimer: I ran more than 400 CS tournaments from 2002 to 2006.*

    @F.Aquino: So you're willing to attribute this to the rendering system that FPS games use, rather than the engine itself? Even though CS is based on Half-Life 1, which is based on Quake? Sorry; not buying it. Granted, I don't buy the premise that new FPSs have no e-sport potential, even though there are plenty of e-sports tournaments centered around newer FPSs. They may not hold the same attraction that CS does to _some_ players, but don't make the mistake of thinking that those players make up all of FPS e-sports.

    Wow. I don't even care about most of this stuff and it's *still* a great read!

    Awesome history, you just missed a little bit at the start, where Microsoft was part of the OpenGL Architecture Review Board, before leaving to work on DirectX. http://www.opengl.org/about/arb/meeting_notes/notes/minutes_12_94.txt

    @greyfade: GL 4 doesn't appeal to developers any more than GL 3 did. I stopped where I did because nothing really changed. Yes, GL 4 exposes D3D 11 features, but you could just use D3D 11 to get those. Nothing has changed that has helped or hurt OpenGL's market position. Think of GL 4 as the ARB treading water.

    @Clinton: I mentioned that Microsoft was once on the ARB. But they didn't exactly "leave to work on DirectX"; they were part of the ARB until around the time OpenGL 2.0 came out. By then, D3D was approaching version 9. Indeed, I wouldn't be surprised if they were part of the reason why the ARB stayed away from shaders for so long. Though I think I'll add a paragraph explaining that speculation.

    @Phelios: This *is* a summary. :)

    @greyfade: no way!! then I will need a summary of the summary as TheBigO said

    Fascinating. It's always great to read the history of major software components from someone in the know.

    @Nicol Bolas It could be based on the Pac-Man engine, it doesn't matter. You won't be able to show me a single new game in which you can "camp" a pixel, thus killing the type of precision needed at the extremely competitive level. Or better: 1.6 is still alive and well while everything that came afterwards failed one after the other. The gfx card manufacturers, Intel, AMD, monitor manufacturers, they all pushed us to move, still do; people have adapted but it will never feel the same.

    voodoo2, which predates TNT, had dual texture processing units - see http://en.wikipedia.org/wiki/RIVA_TNT

    @F.Aquino: My point was that you're misassigning blame. The API used to render has _nothing_ to do with aiming precision. It is the _engine_ code that allows this. For whatever reason, Quake1's engine allowed you to do this, while other engines don't. If game developers wanted to allow players to "camp a pixel" (whatever that means), they would code it into their current engines.

    Wow, that was an impressive write-up. I do have a question though; why are you still using OpenGL if it seems to have so many problems? Is it due to cross-platform compatibility?

    @DMan: Cross-platform compatibility is pretty much the only real strength OpenGL has over D3D; in most other respects, they're close enough to not matter much. Also, most of OpenGL's problems are in the past; the problem is that the past is often why people use something in the present. And that's what my article was showing: how screwups in the past influenced people to pick D3D.

    `Which they, in all of Microsoft's collective imagination, called... the High Level Shading Language (HLSL).` LOLled for more than 6 minutes at this.

    It's a flipping shame that you only have about 1k rep on this site. Stupid community wiki. Amazing summary of the OpenGL/D3D battle. I think Glide had some impact there too (remember 3DFX having the edge on the market before their Sega Dreamcast snafu). Most people presumed they would become the SoundBlaster of the GPU market.

    @ApprenticeHacker same thing I did when I read what `hal.dll` was. A machine I was fixing once had a boot issue that stopped it at loading `hal.dll`. I first thought it was a joke on `HAL 9000`; imagine my disappointment.

    What's particularly odd about this is that D3D clearly subscribes quite heavily to "worse is better" (whether that's accidental or intentional is an interesting question) whereas the design of OpenGL is very much an "everything *including* the kitchen sink (with extra kitchen sink)" approach, with lots of bloat, legacy cruft and gubbins. Given the Windows vs Unix heritage of each, one would almost expect the opposite to have been the case, yet it's not.

    By far the best answer on Stack Overflow/Stack Exchange. Though I disagree with the notion that OpenGL is a failure: it has a very good card to play in mobile devices, where OpenGL rules supreme, making it the undisputed king of graphics libraries there. Still, you can have all my upvotes.

    This is a truly great answer. I have just one additional request. **Remember WinG?** How does that fit into all this?

    @MichaelKjörling: WinG was really before all of this. It was more of a precursor to DirectDraw than D3D.

    Do you think Valve will succeed with their Steam Box if it runs Linux given all the disadvantages of OpenGL you just mentioned?

    I need to put a few things straight which you left out. The OpenGL ARB did not survive in its OpenGL 2.0-to-3.0 form. There was a messy usurpation by many dissatisfied board members, which ultimately ended with a much more capable and agile ARB. OpenGL is by no means dead. It is used on every mobile device and on all Macs and Linux machines in existence, which owes partly to the fact that out of the ashes came OpenGL ES 2.0.

    @FlorianBösch: I never claimed that OpenGL was "dead". As for any "messy usurpation" that may or may not have happened, you should provide some evidence of that. Most of that stuff would be internal to the ARB, so it's not likely we will know what was going on behind closed doors. I only stated what is verifiable information. The ARB was adopted by the Khronos Group; that's verifiable; whether this was "usurpation" or not is a different matter.

    It is a great read, but I think it is a little off-topic, as it does not really address the original question.

    @NicolBolas: do you think you could add year numbers for context? Not everyone knows which hardware was released when, etc. I think it would make an even greater addition. Thx for considering!

    I registered just to congratulate the guy who took the time to write this as part of a response.

    @asattar This was back then. It's history now. **Nowadays, OpenGL is better than DirectX. See http://blog.wolfire.com/2010/01/Why-you-should-use-OpenGL-and-not-DirectX for lots of reasons you should use OpenGL.**

    @Jop: Considering how much misinformation is contained in that obvious propaganda piece, I would suggest avoiding that article. How much faster OpenGL's draw calls are than D3D10/11's is very debatable these days; the article cites an NVIDIA PDF from ***2006***. Saying that Microsoft leaving the ARB is part of a "FUD campaign" is an outright lie. And the rest of the "FUD" stuff is alarmist, anti-Microsoft BS. D3D 9 came to dominate the gaming landscape all on its own, well before the Vista release.

    @Jop: This answer was written in 2011 and that article was written in 2010.

    @Jop: ... and? The difference between 270FPS and 303FPS is approximately... 0.4 milliseconds. That's *barely* more than a rounding error, and in a game that was actually *using* the hardware (rather than one that throws away 4 out of every 5 frames), it would be an insignificant difference. In short, the performance of D3D nowadays is reasonably comparable to the performance of OpenGL.

    @Jop: Also, I would like to point out that Valve gets to *cheat*, because they can basically say to the IHVs, "here's how we're going to render; go make this optimal in your drivers." Other programs don't get to say that. The article you mention even points this out, saying that their work has caused driver changes. They *claim* that this "benefits all games", but the reality is that it only benefits games that render the way that they do.

    In 2010-2013, DX has been declining along with Windows' decline on mobile, while GLES has been rising, driven by Apple and Google.

  • I found it strange that everybody's focusing on the user base, when the question is about 'game developers', not 'game publishers'.

    For me, as a developer, Linux is a bloody mess. There are so many versions, desktop managers, UI kits, etc... If I don't want to distribute my work as open source, where the user can (try to) recompile so it fits his unique combination of packages, libraries and settings, it's a nightmare!

    On the other hand, Microsoft provides (most of the time) incredible backward compatibility and platform stability. It is possible to target a whole range of machines with one closed-source installer, for instance computers running Windows XP, Vista and 7, in 32- and 64-bit flavors, without the proper DX or VC redistributables installed, etc...

    One last thing, PLEASE EVERYBODY ON THE INTERNET STOP COMPARING OPENGL AND DIRECTX! Either compare Direct3D vs OpenGL or don't do this. DirectX provides input support, sound support, movie playing, etc etc that OpenGL doesn't.

    "For me, as a developer, Linux is a bloody mess." Indeed! I work in an environment with Fedora and Ubuntu. We have problems even between just those two. (I must add I'm a Linux fanboy.)

    @jv42: This seems a common misconception: Almost all of those versions and desktop managers and UI kits, etc. are pretty irrelevant to getting a game up and working. If your game (as shipped) depends on more than libGL, libX11, and libasound (or libopenal or libao or liboss), you're doing something wrong.

    If you just ship a Linux binary that works well and your game/program/whatever is popular, most likely Linux users will take care of the rest. You get bonus points if you ship a 64 bit version as well.

    @greyfade nice thing to say, but I'm still waiting for a single game on Linux with high production values (really, I am; I'd love to be able to play games and give up Windows). So far I haven't seen any. Another key point about Windows is that users will (sometimes) *pay* for the games. I really don't know anyone who pays for any software on Linux (and I don't either). But I do pay for video games.

    @TM: I don't understand why you're singling me out - I'm not saying anything beyond the fact that developers seem to be confused about what libraries actually matter for writing games. That said, I've not seen a Linux user that didn't pay for the native games they play, when given the chance.

    @greyfade "If your game (as shipped) depends on more than libGL, libX11, and libasound (or libopenal or libao or liboss), you're doing something wrong." -- And then you link to libao-0.0.4alpha, and your user has libao-0.0.5beta and nothing works.

    Please post your answer in multiple responses, so we can upvote more than one time.

    @quant_dev: `libao` was a bad example: it's GPL, so you couldn't use it in a commercial game anyway. But regardless, if you depend on an alpha version, that's your fault - ship it with your game. I'd say the same for any other LGPL dependency.

    @TM: I don't understand. I ran UT, UT2004, and Quake 3 on Linux, and they ran just great! (OK, UT2004 had a problem where it would run slower and slower on each map change, and I eventually just rebooted into Windows, but...) I miss those days.

    @quant_dev: That's the thing. If libX depends on libY and on libZ, then consider you've made a poor choice. Most developers seem to get on well with shipping `libSDL` and `libopenal` and using the system `libasound` and `libGL` with several other things statically compiled and no other dependencies to speak of. If you've chosen a library that depends on a whole bloody distribution, you're doing it very, very wrong. Actually, I'd say the same thing if you were on Windows.

    Packaging up a binary so that it runs on all Linux distributions isn't that hard. Heck, the Blender developers do it with each release. There's only the one (well two, 32 bit and 64 bit) Linux release package of Blender.

    @TM I pay for games on Linux. Ever heard of the Humble Bundle? I never pay less than $10. If Steam and games ever get ported to Ubuntu (a possibility Valve is starting to explore), I will pay for AAA games as well. Yes, AAA prices for AAA games on Linux. Us Linux fanboys aren't cheapskates.

    Saying Linux is a mess due to all the versions is wrong. If you are coding for Windows then it's no different than coding only for Ubuntu. You would have the same stability of a single distro as you do for the "single distro" called Windows.

    @Rob you fail to make a point there. There are indeed many distros for Linux; if you target only one of them, you can't say your game 'runs on Linux', now can you? And the stability of Windows as a target platform is top notch; you can run many 20+ year old games on the latest Windows.

    @jv42 - You miss my point. His complaint is that each distro is different so Linux is hard to target and a mess. My point is he's targeting just one "distro" called Windows and he can accomplish the same thing by targeting just one "distro" called Ubuntu or whatever his preferred distro would be. He is claiming Linux is a mess and the problem when it is not.

    @Rob My point is: without in depth knowledge of Linux, the perception of it is a real mess: there are many distros, with different package managers, preferred UI kits, etc... which fail to provide a clear path when one wants to develop on it. The reality is not so bleak, but that perception is still based on facts.

    jv42, you never actually tried to deploy something on linux, did you? it’s dead easy, as @greyfade said: ship your game with the known working versions of all libs except some system libs like libasound and libGL, and everything will work forever.

    @flyingsheep Please read the question and my comments. We're not talking about facts but perception.

    @flyingsheep And in fact I'm currently working with Linux again, and it's messier than I thought when I was distanced from it. For instance, 32-bit software on a 64-bit OS is complicated on Linux (as in: not always working out of the box) vs Windows, where WOW64 takes care of the details for you.

    i’m using linux exclusively for ~4-5 years and not once have i stumbled upon a 32bit-vs-64-bit issue. stuff just works out of the box, that’s my honest experience.

    @flyingsheep As a developer? Also, I've just had issues using a prebuilt package (official Firefox!) on a 64-bit system.

    yes. i use the fallback sequence: 1. get from official repos, 2. get 3rd-party packages for your packaging system, 3. get tarball and create package from it 4. repeat all steps with 32 bit version. i currently have 2 32 bit games on my pc (i used to have more)

    Sorry for replying to a 2 year old post but: Nonsense. Porting a game to Linux is _very_ easy if you use a good cross-platform game engine, and it will work on most editions of Linux available.

  • It's because there are more Windows users on the planet than Linux and Mac users. The truth is that people make things for whichever platform has the biggest market.
    The same goes with mobile phones: Android and iPhone have awesome games, but Windows Mobile and Symbian don't...

    It has nothing to do with adoption; PCs can run OpenGL and Linux just as they can run Windows.

    @Mahmoud Hossam: That's not true. Using OpenGL doesn't mean that your entire application will just magically work under a *nix environment; it means that the graphics engine will (and even then it doesn't mean there aren't quirks on (for example) Windows that don't exist on Linux and vice versa). That's not the whole basket, and there is definitely an appreciable cost to maintaining their code for multiple platforms.

    @Ed Nothing works "magically", portability comes at a cost, always.

    @Mahmoud: Exactly. And if you're not going to bother porting to Linux/Mac OS anyway, then why not use the native library? DirectX is *much* better supported on Windows than OpenGL is.

    @Dean I'm not talking about windows-only games at all.

    @Mahmoud: the whole premise of the question is "why do developers prefer Windows". Answer: because that's what gamers use. There's no point porting to Linux/Mac if it only makes up 2% of your market, and in that case, there's no point using OpenGL if you're not going to port anyway.

    @Dean gamers use Windows because they have to; if developers switched to another platform, gamers would follow.

    @Mahmoud: Sure, it's Catch-22. I'm not saying Windows is *better* or anything like that. But if the question is "why do game developers use Windows", then answer is "because that's where the gamers are". There's no technical reason why you can't develop a game for Windows/Mac/Linux.

    @Mahmoud, it's not the game developer's job to get people to switch to Linux.

    @Dean actually, I think gamers choose windows because developers develop games for it, but what you said is also correct :-)

    @GrandmasterB it's not his job, but he's capable of doing so. I've been a gamer for quite some time and I know how gamers think.

    "the same goes with mobile phones: android and iphone have awesome games but windows mobile and Symbian don't..." Well, first off Symbian till has a way bigger market share than Android and iPhone, so that goes against your theory. Secondly, WP7 has access to the whole XBLA collection, which is pretty impressive

    This answer is simply about how many people use what. It's difficult to do business in a niche market, so if you have the option to reach more people, why not choose that? This answer is about business, not about the technologies used. And did you know that some of the awesome games for Windows can also be run under WINE on Linux?

    @Mahmoud You are a gamer, but you are also a techie. That is why you **think** gamers would follow if games started being developed for Linux. You understand Linux and probably care for it; most gamers would abandon PC gaming and go to an Xbox or PlayStation.

    @Marcelo: Most gamers *have* abandoned PC gaming and gone to Xbox or PlayStation.

    Please improve your post with proper capitalization.

    @John 10+ million World of Warcraft players would like to have a word with you. And this is just one game.

    I've heard the Windows version of World of Warcraft runs smoother on Linux than on Windows (using Wine, the native Win32 API reimplementation).

  • Because Windows has over 90% market share, and Linux (since you specifically asked about Linux) has a reputation for having lots of users who don't like to pay for software. Whether or not that's true or how true it is is irrelevant; the perception is there and it influences people's decisions.

    If developers use OpenGL, it will support both Windows and Linux, so it will actually be a marketing advantage to attract Linux users who are willing to pay and who use Linux because they believe it's better.

    Even using OpenGL, there are costs in developing and testing cross-platform which the Linux market doesn't justify. DirectX is also (sadly) a better platform for PC gaming at the moment than OpenGL - there are very few new games on PCs being built on OpenGL.

    Cross platforming isn't as straightforward as "just code for OpenGL".

    That's not true actually, that Linux users won't pay for software. The Humble Indie Bundle (a bundle of games you can get for any amount of money you wish to pay) has been done twice now and every time it showed Linux users paying more than Windows users for the bundle.

    @Htbaa Of course they paid more; they were desperate. Those are probably the only games they get to play on their OS.

    @Htbaa True, but it's also pretty telling that the majority of the money made was from Windows users. There are simply more of them.

  • Because Windows is backed by a huge organization that, more than a decade ago, decided it wanted game development to happen on its platform.

    This wasn't true for the Mac, and it isn't true now. Not even for iOS. Apple doesn't provide tools for iOS game development. But it's a huge market (there are more iPhones out there than there were PCs in 1995) with relatively little competition, so people do it anyhow.

    As for Linux, there's not even some sort of central institution that could set any sort of priorities. The direction in which Linux is going is more or less determined by a bunch of very good, yet slightly unworldly, programmers.

    To create a PC game today, you need a lot of 2D/3D artists, game designers, scripters, actors, testers and what not. As for the actual programming, you might simply use an existing game engine (CryEngine, Unreal Engine, Quake Engine, Source Engine). So you might be able to do the whole thing without any actual programmers.

    Because of that, and because of the nature of businesses, programmers have little say in which platform is chosen. And typically, managers look for support, which is something Microsoft claims to offer, and to deal with things that are somehow graspable to their thought patterns, which open source is not.

    For that reason, most commercial end-user software development is done on Windows.
    I work for a company that creates Flash games and is thus not bound to a particular platform. However, we all develop on Windows, because most of the tools we use aren't available for Linux.

    "So you might be able to do the whole thing without any actual programmers." You forgot the sarcasm.

    @AllonGuralnek: No sarcasm there. In classic game development of big games, programmers will create engines and means to provide content and behavior (through visual editors or actual scripting) and then game/level/mission designers will use those means. If you buy a working and sufficiently powerful engine, you can basically cut out step one.

    Do you have a specific example of a reasonably notable game that was created without writing a single line of code?

    @AllonGuralnek: I didn't say anybody would be creating games without *code*, but without *programmers*. You don't need to be a programmer to create an entirely game-changing mod. DotA is the most popular one I can think of off the top of my head.

    People who write code are programmers. You don't have to be an expert - if you've written a program that someone else finds useful, you're a programmer as far as I'm concerned. Also, DotA (for Warcraft III), is neither a mod nor game-changing. It's simply a map, with the same graphics, gameplay mechanics, controls and rules as the original game. It simply introduced certain constraints, narrowed the scope by removing some mechanics, changed the configuration of some elements and repurposed others, which happened to be appealing to many and became a popular sub-sub-genre (like Tower Defense).

    @AllonGuralnek: No. People who write code are people who write code. Level designers are required to have a certain understanding of scripting. Many programmers are required to have a certain amount of management skill. That doesn't make the first one a programmer, nor the second one a manager. Also, your assessment of DotA is wrong. Firstly, it's entirely game-changing, turning an RTS into a new genre, and secondly, it is considered a separate game by many eSports leagues, including the ESWC.

  • As some have already said, the most important part is the user base. 95% of PC users use Windows. PC gamers use almost exclusively Windows. Even those who use Mac or Linux most often run Windows games through some virtualization or emulation (with very, very few exceptions).

    But demographics are not everything. I wouldn't underestimate the part Microsoft plays in making the platform more attractive for game developers. Basically, you get a fully featured set of tools for free, most importantly XNA Game Studio. This allows development not only for Windows, but also for the Xbox 360. And with the latest edition, even for WP7 phones. Obviously, since it's a Microsoft tool, it uses DirectX, not OpenGL.

    Note for any time-travelling readers: As of April 2014, XNA will be officially dead. The last release (4.0) was published in 2010 and won't be seeing a new version between now (2013) and its sunset.

    Another note to any time-traveling readers: Windows 8 (and up) development will no longer be free in the future.

  • Ewwww, I don't. I use Linux almost exclusively. I dual-boot to Windows to make Windows builds, and use the Mac for the Mac builds, but that's it.

    The trick is a cross-platform framework we've developed over the years. Our games are built on top of that, and behave identically in Linux/OpenGL, Mac/OpenGL, and Windows/Direct3D (and soon in iOS/OpenGL).

    Admittedly my company doesn't do AAA titles, so it may not apply to those, but we do make top casual games (see website - CSI:NY, Murder She Wrote and the two upcoming 2011 titles are examples using important licenses; The Lost Cases of Sherlock Holmes 1 and 2 were quite successful as well).

    I wouldn't give up gedit+gcc+gdb+valgrind for anything else.

    gedit is underrated for programming

    @Alexander, underrated doesn't even begin to explain

    Sorry, I did give up gedit in the end. I now use gvim and I'm much happier :)

  • The answer is obvious. The objective of writing a game is to make money. More end users run Windows, therefore there is a bigger market and you would expect to make more money from a Windows game than a Linux game. It's that simple.

    If ever you ask yourself the question 'Why does someone do...', just remember that money makes the world go round.

    Yes but you can make a cross platform game and get more than just windows users ;)

    Being cross-platform isn't everything. If the number of extra users you get is a relatively low percentage then you need to balance the extra development cost and ongoing support cost against what extra you're going to get in from them and make an informed decision based on actual hard data. There's no globally right or wrong answer to that one, but there is an answer that's right or wrong for each individual program, and what's right for one program may be wrong for another.

  • Tools, tools, tools.

    That's what it comes down to. Develop on Windows and you get access to some of the best development tools on the planet. Nothing comes even remotely close to Visual Studio's debugger, the DirectX Debug Runtimes are awesome, PIX is awesome, and comparable equivalents just don't exist on other platforms/APIs. Sure, there is some good stuff there; I'm not saying that the tools on other platforms are bad, but those that MS provide are just so far ahead of the pack (honourable exception: Valgrind) that it's not even funny.

    Bottom line is that these tools help you. They help you get stuff done, they help you be productive, they help you focus on errors in your own code rather than wrestle with an API that never quite behaves as documented.

    PIX *is* awesome. Debugging shaders by clicking a bothersome pixel and seeing what happened is great!

  • So I've gone over all these answers, and as a game developer who has code on console games that have been on Walmart shelves, I have a very different answer.

    Distribution.

    See, if you want to be on a Nintendo console, you have to get Nintendo's permission, buy from Nintendo's factories, pay Nintendo's overheads, negotiate with Walmart, deal with warehousing, you need money up front to manufacture, to print boxes, to ship, to do all the insurance, et cetera.

    If you want to get onto the Xbox, sure, there's XBLA, but you still need Microsoft's blessing, you have to wait your turn in line, and it's tens of thousands of dollars just to release a patch, etc.

    On iOS, you still need Apple's okay, and they can (and do) capriciously pull you.

    On Steam, you still need Valve's permission or Greenlight, and lots of money.

    .

    On Windows? You set up a website and a download button.

    .

    I'm not saying the other platforms aren't valuable. But there is so *much* horrible stuff going on when you're trying to develop a game, that to me, the promise of being able to just slap a binary on a site and focus on the work - at least to get started - really lowers a lot of potential failure barriers.

    "We can do the XBLA port later, when things are stable" mentality.

    And to a lesser extent sure this is fine for Linux too, and if seven customers is good, you can start there.

    But Windows has three massive advantages: genuinely open development, genuinely open deployment, and a very large, very active customer base which is interested in quirky stuff.

    It's hard to imagine where else I'd rather start.

Licensed under CC BY-SA with attribution.

