
Python3 support comes to Vegastrike

Posted by safemode on February 24th, 2013

In preparation for what will ultimately become 0.6, after many hours and an angry girlfriend or two, we give you a Python3-functional Vega Strike, along with an update of the in-tree Boost to 1.53.0.

Get vegastrike’s repo at


and the data dir at


You need Python3 to run this, and the default option for system Boost is reversed from normal trunk. So disable the option if your in-system version of libboost_python is not up to 1.5x and doesn't have a Python3 target.

Right now we need as many people building and working off of the branch as possible to iron out any remaining bugs and issues. It seems to be much less forgiving of code errors, so that should help track down problems and get things even more stable than in the 0.5.x release.


It should work with your current savegame data; the Python files will be recompiled automatically.

This has only been tested on Unix. The Win32 side of things needs to update its build files to point to 1.53.0 for in-tree Boost. Though, that's all they'd have to do: there are no changes in defines needed, and no differences in filenames (added or removed) from the old 1.45 version of Boost that was in the repo.


Those mods out there wishing to port to Python3 need to first process their files with 2to3 and an indent cleaner. Then go through them: any non-loop uses of "range" need to be wrapped in list() to get the same functionality, and all uses of division need to be checked so that any int/int divisions are written with // to keep the same behavior. There aren't many other issues that VS comes across, but little things may turn up, so the porting process is a handle-as-needed thing. Keep in mind, the current rev 13522 is identical in functionality to the 0.5.1r1 release but updated to handle Python3 and Boost 1.53.0, so any mods that work with that release and just want to keep running for the foreseeable future without needing users to downgrade libs can use this. 0.6 will contain many other changes that will certainly break 0.5.x mods.
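For the range and division changes, a hypothetical mod snippet (the names are illustrative, not from any real Vega Strike mod) would be ported like this:

```python
# Hypothetical mod snippet after porting; "waypoints" etc. are made-up names.

# Python2: range() returned a list, so mods could slice or store it.
# Python3: range() is a lazy object; wrap non-loop uses in list().
waypoints = list(range(10))        # py2 original: waypoints = range(10)
last_three = waypoints[-3:]

# Python2: 7 / 2 == 3 (truncating integer division).
# Python3: 7 / 2 == 3.5; use // to keep the old behavior.
shield_per_facing = 7 // 2          # py2 original: 7 / 2

# Looping uses of range need no change at all:
total = 0
for i in range(5):
    total += i
```

2to3 rewrites most of this automatically; the list() wrapping and the `/` vs `//` distinction are the parts that need a human eye.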

Vega Strike splits the Atom

Posted by klauss on May 31st, 2012

Meaning… if you run it in an Atom, be ready to watch it break in half due to the load!

Vega Strike has always been GPU-hungry, there’s no denying. Ever since the 0.5.0 release, it has also been RAM-hungry (or, more specifically, ever since we had to retire soundserver). But, as those that tried may have noticed, and as those that read the forums may have read, this description isn’t entirely accurate on the netbook front. There, rather than hungry, I’d call it starving.

Now, I’ve only toyed a little bit on an N5xx, so this is far from a thorough report. But, since the devblog has been rather quiet lately, I started writing.


Right before building for the N5xx, I thought: I need proper optimization settings. There's no point in benchmarking the game if the build isn't optimized for the (rather different) hardware. Luckily, gcc "recently" added a nifty "native" optimization option - it just detects the CPU being used, and optimizes for that.

Building on the dual-core HT netbook was great. That little machine managed to build VS in about half an hour, which is what it took on a P4 1.7GHz I had, or a P3 1GHz - they both took the same time. Given that the netbook consumes a whopping 8.5W, I was surprised. That is undoubtedly the "quad-core" effect - I had heard HT on these architectures worked a lot better than in newer ones, say Sandy Bridge, and I could go on at length about why - but suffice it to say: it's true. "make -j4" really paid off here. Beautiful.
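The flags involved are standard GCC ones; a rough sketch of such a build (VS's own build system may wire the flags in differently):

```shell
# -march=native makes GCC probe the host CPU and optimize for it.
CXXFLAGS="-O2 -march=native"

# One make job per hardware thread; an N5xx exposes 4 (2 cores x HyperThreading).
JOBS=$(getconf _NPROCESSORS_ONLN 2>/dev/null || echo 4)

echo "make -j${JOBS} CXXFLAGS=\"${CXXFLAGS}\""
```

The point of `-march=native` is exactly what the post describes: no need to guess the right `-march=` value for an unfamiliar chip.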


This little thingy I borrowed from my sister has an astonishing 2G of RAM on it. That's the maximum the chip can handle, for those that don't know, so you don't get a bigger Atom. VS can run with 1G and some swap, so 2G was plenty. Still, I could feel the poor thingy ask for mercy. Atoms, small as they are, are 64-bit. That makes VS use some more RAM than it would on 32-bit, and on a 2G system with no video RAM whatsoever, it was pushing it.

I think the worst thing is that APUs (and other onboard GPUs too) have to share system RAM with the CPU - they have no dedicated RAM, and that's a big disadvantage for OpenGL. OpenGL has to keep a copy of all textures in case they have to be swapped out of VRAM, and APUs, which have no VRAM proper but only a reserved chunk of system RAM, are no exception. So all textures use up memory twice - once in system RAM, and once in video RAM. On a 2G system, you feel it. A 1G system would be swapping and stuttering constantly (I've tried), and if you don't decrease texture resolution, it might even crash.

Which brings me to the N5xx APU's limit of 256MB of texture space. That seems plenty, but since Intel doesn't support DXT compression (they market it as DXT de-compression, meaning the driver expands the texture when loading it into RAM - a big lie that retains none of the benefits of DXT), those 256MB run out quickly.

Full-size planet textures, for instance, use up 160MB on their own. Add a few stations and ships, and texture swapping is pervasive. In fact, I’ve had it crash on me when textures were at full resolution - especially when looking at earth, which has the biggest, bestest and meanest texture set - because if an object’s textures don’t fit in that limit, the driver will kill VS.
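Some back-of-envelope math shows how fast that budget evaporates; the 8192x4096 base size below is an assumption for illustration, not measured from the VS data set:

```python
# Back-of-envelope texture memory math (texture dimensions are assumptions).

def texture_bytes(w, h, bits_per_pixel, mipmaps=True):
    """Size of one texture; a full mipmap chain adds ~1/3 on top of the base."""
    base = w * h * bits_per_pixel // 8
    return base * 4 // 3 if mipmaps else base

MIB = 1024 * 1024

# An 8192x4096 planet texture, expanded to RGBA8 by a driver that does
# "DXT de-compression" instead of keeping the texture compressed:
uncompressed = texture_bytes(8192, 4096, 32)

# The same texture kept as DXT1 (4 bits per pixel) on a GPU that honors it:
dxt1 = texture_bytes(8192, 4096, 4)

print(uncompressed // MIB, "MiB uncompressed vs", dxt1 // MIB, "MiB as DXT1")
```

One such texture alone eats around two thirds of a 256MB budget once decompressed, so a planet with diffuse, clouds, and lights layers gets to the quoted 160MB very quickly.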

So… disabling faction textures and lowering texture resolution was a must. I must say I didn't notice a performance impact - it was more about crashing than about performance.

Shaders and whatnot

Now, surprisingly, the on-chip GPU (APU for the knowledgeable folks) on those tiny processors is quite decent at per-pixel stuff. It's not super fast (or fast, for that matter), but it's surprisingly capable given the low power budget (TDP for the geeks). That means it can run all the shaders that run on other Intel onboards, albeit slower.

A lot slower. At the rather humble resolution of 1024 x 600, I could get maybe 4 fps looking at Serenity. But, check this out, the pixel pipeline wasn’t the bottleneck! These intel APUs have poor hardware vertex shaders. So much so, that for a while I thought they were running on the CPU!

Both shader units run at 200MHz, which is half the clock rate of the CPU. But, more importantly, I think full-precision floating point arithmetic must be working in scalar mode rather than vector mode (only 1 operation per cycle, instead of 4), because the difference in speed compared to the pixel shaders is astonishing.

Update: it seems that they do actually run on the CPU, according to Tech Report: "Integrated graphics processors typically lack dedicated vertex processing hardware, instead preferring to offload those calculations onto the CPU. As a unified architecture, the GMA X3000 is capable of performing vertex processing operations in its shader units, but it doesn't do so with Intel's current video drivers. Intel has a driver in the works that implements hardware vertex processing (which we saw in action at GDC), but it's not yet ready for public consumption." This means that, coupled with the rather slow CPU, vertex shading is severely underpowered. We can only hope driver improvements will reverse this situation.

This situation was unheard of before APUs came inside netbooks. It was always the case that the pixel pipe would be the bottleneck, and if you wanted to accelerate stuff you pushed some calculations out of the pixel shaders and into the vertex shaders. With a few thousand vertices per object vs a few million pixels, it was a clear win. Not anymore - not on netbooks - the underpowered vertex shader can’t keep up with a tenth of the workload the pixel shader can handle, and all our shader optimization just doesn’t make sense anymore.

For instance, many shaders perform multiple passes, in order to avoid computing the expensive pixel shading on occluded pixels. This is such a dumb thing to do when vertex shaders are the bottleneck, that I’ve been considering adding a “Netbook” shader setting that will disable that. It could possibly multiply FPS 2x or perhaps even 3x. Some other “optimizations” would beg revising too, so this will take time.

And the bug in Vega Strike

If this wasn’t enough, bugs in VS contributed to the slowness. Especially a rogue high-quality shader that slipped into low-detail modes. Other intel GMAs handled it pretty well, in fact, but not this APU. After fixing that, things improved a great deal. I still get unplayable FPS when running shaders, but I can see a small light of hope - maybe if I can rebalance the shaders with the underpowered vertex pipeline in mind, maybe it might work.

Python universe

That's another resource hog for the Atom. This lowly processor isn't up to the task of running VM bytecode, which is what Python runs on. Java and Dalvik, two other technologies that run VM bytecode, have a JIT - a module that spits out optimized machine code to replace the bytecode - which makes them fare a lot better on the Atom, as evidenced by the abundance of Atom hardware running Android.

Python has long wished for one, but never gotten it. So it still emulates the running program by reading the bytecode and performing the operations in a very Atom-unfriendly way.

The Atom's simplistic architecture isn't well suited to running this kind of generic, utterly suboptimal code. It shows. When Python scripts start running in VS, the stuttering is immediate. Spawning ships, a very Python-heavy part of VS, is the worst.

I don't see a way to fix this, other than moving most Python universe simulation stuff to a thread. But VS is years from becoming thread-safe in that way, so… bad mojo. For now, all I can do is try to optimize the Python scripts. I can't make Python fast on the Atom, but I can probably use better algorithms to do less work in Python. I've been slowly grokking through the Python code trying to find obvious places to optimize, but I haven't been able to measure anything yet.
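To illustrate what "better algorithms to do less work in Python" can mean (a hypothetical illustration, not actual VS code): replacing an every-frame O(n²) distance scan with a coarse spatial grid, so each query only inspects neighboring cells.

```python
# Hypothetical illustration, not actual VS code: a coarse spatial grid
# so proximity queries touch only nearby ships instead of every ship.

from collections import defaultdict

CELL = 1000.0  # grid cell size in game units (an assumed tuning value)

def build_grid(ships):
    """Bucket ship ids by the grid cell containing their position."""
    grid = defaultdict(list)
    for ship_id, (x, y, z) in ships.items():
        key = (int(x // CELL), int(y // CELL), int(z // CELL))
        grid[key].append(ship_id)
    return grid

def nearby(grid, pos):
    """Ships in the 3x3x3 block of cells around pos - a superset of
    everything within CELL distance, found without a full scan."""
    cx, cy, cz = (int(c // CELL) for c in pos)
    found = []
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            for dz in (-1, 0, 1):
                found.extend(grid.get((cx + dx, cy + dy, cz + dz), ()))
    return found

ships = {"llama": (10.0, 20.0, 30.0), "hyena": (900.0, 0.0, 0.0),
         "clydesdale": (50000.0, 0.0, 0.0)}
grid = build_grid(ships)
print(sorted(nearby(grid, (0.0, 0.0, 0.0))))  # the far-off clydesdale is skipped
```

The interpreter is still slow per operation, but doing a handful of dictionary lookups per query instead of touching every ship is exactly the kind of win that survives a slow VM.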


The conclusion is I have to do more testing. There are the new N2800s, which run a lot faster and feature a tile-based GPU. I have absolutely no clue how that GPU will fare, but I can imagine it will pose its own challenges. Even on the N5xx I should be able to streamline shaders a bit and perhaps optimize the Python scripts. Hope is not lost yet.

0.5.1 Beta1 Released

Posted by klauss on March 9th, 2011

Very little to say other than :D and what has been said in the General Announcement post.

Now, we need beta testers!

Tools of the Trade

Posted by pheonixstorm on January 18th, 2011

Recently I put a call out on sourceforge.net for help building the new toolsets for Vega Strike. I must say I did not expect to find so much interest, especially in such a tiny, barely-known project that lacked… well, everything. So far I have at least 4 (yes, FOUR) people who are either looking over the Vega Strike data or actively working on new tools.

What are we looking at, you ask? Well, if you haven't seen it yet, I have a demo of a tool over at the tools project: http://sourceforge.net/projects/ppueditor. Yes, I know… the name of the link does not reflect its current status. I originally started the project as a means to edit the files for Privateer but later decided that the tools would better serve the community as a whole.

Go download the demo (Windows) and its update. Linux users: the current SVN revision is bugged and won't compile. I am trying to correct this by finishing the autoupdate code. Qt is being tricky about its networking calls… Enjoy the tools and make sure to visit the forums and post your comments!

Planets, a reality of the hard working

Posted by klauss on September 2nd, 2010

Earlier I posted about a dream I had.

Today I’m posting about a project I have.

Hopefully, soon I’ll be posting about the game we have.

There's no discussion about it: my areas of expertise in game development are graphics and sound. I did a tiny bit of sound earlier, with streaming support. I'll be back in those realms eventually, to complete streaming support for music, which won't be immediately apparent to users but is an important step for the audio system. Anyway… I'll be back to sound eventually, but right now I've been taking some time off sound to revitalize my mind. I did that by thinking about graphics this time :)

Yep… switching from task to task is recreational for me - it lets me come back afterwards with renewed ideas and energy. Is that weird? I don’t know… but that’s me.

So, I've been playing with planet shaders. I got them looking nice, as I posted earlier, and lately I've been hooking those shaders into the system generator. Sorry folks, but I have some bad news on that front:

It’s done.

You say bad news? Ftw?

Well, it’s bad news because for it to take effect (whenever I commit it, which I haven’t done yet) you’ll have to delete all your generated system files. Bummer. All the systems you thought you knew you know no longer.

Sadly, I don’t know of an alternative. There’s no easy way to add the shaders to existing systems, and there are tons of parameters to play with. Texture files have to be set, technique names, parameter overrides… all that is generated pseudorandomly once, and the result stored in a private folder of your system (close to where savegames are stored).

But hold on, there’s more bad news.

I've only done the terrestrial worlds. I've been working hard on gas giants, and although they look cool at the moment, I'm not entirely happy with either their looks or their performance right now.

So I'll keep working on them, but I'll commit the terrestrial planets so everybody can enjoy (and comment - feedback is the best development tool, second only to contributors). I may go ahead and tackle rocky planets and asteroids before I get the giants right… the technique I chose for the gas giants brings my GF9800GT to its knees. Knowing a lot of people play with Intel GMAs, which are orders of magnitude slower, we certainly can't afford a shader that renders a full-screen planet at a blazing 10fps on that kind of hardware.

The bad news I’m talking about is that this process of deleting your generated system files will have to be repeated every time I commit new planet shaders (if you want them). Which I expect to happen more than once or twice. Sorry folks, it’s the price of progress.

With planet shaders come tons of engine improvements:

  • I finally found how to hook randomizable parameters to planets, so expect more variety than texture changes alone. I’ll probably play with other parameters that make up the looks of a planet, like cloud coloring and whatnot.
  • sRGB framebuffer support is increasingly important with the multipass techniques the planets use. sRGB framebuffers mean you'll experience improved color reproduction and fidelity. In fact, part of the graphical appeal of the planet shots I've been posting comes from accurate gamma correction. I'll be converting all the other shaders to use those accurate gamma-corrected techniques.
  • Shaders support preprocessor #includes. That sounds technical I know… but it means it will be easier to work on new shaders given that we’ll have a reusable “standard library”. In fact that’s where all the gamma-correction stuff resides.
  • The “reload shaders” hotkey was not working. Now it is :D (mostly) - I kind of needed the key to avoid having to relaunch VS hundreds of times while developing the shaders.
  • City lights and atmosphere glow are now part of a planet's technique, and not weird hacks in system files. Crafting beautiful systems just got easier :D - the bad thing here is that they don't work with shaders disabled. Sorry folks, but progress needs programmable shading; if you find yourself forced to disable shaders, VS will start to look uglier every release. I'll struggle to keep it playable, though… just not pretty.
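The gamma-correction point deserves a closer look. The math behind it is the standard sRGB transfer function; here it is sketched in Python (the real work happens in shaders, but the formulas are the same):

```python
# Standard sRGB <-> linear transfer functions, per color channel in [0, 1].

def srgb_to_linear(c):
    """Decode an sRGB-encoded value to linear light."""
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(c):
    """Encode linear light back to sRGB for display."""
    return c * 12.92 if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

# Lighting must be computed in linear space: blending two sRGB values
# directly gives a mid-tone that is too dark on screen.
naive_mix = (0.0 + 1.0) / 2                      # blend done in sRGB space
correct_mix = linear_to_srgb((srgb_to_linear(0.0) + srgb_to_linear(1.0)) / 2)
print(round(naive_mix, 3), round(correct_mix, 3))
```

An sRGB framebuffer does the encode step in hardware on write, which is what lets multipass techniques accumulate light in linear space for free.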

Well, I think that’s it with the features.

But those features need art.

I'm formally requesting help here… we have quite varied planet textures, but a lot of the "layers" in those planets are missing. We have tons of terrestrial worlds (forest, caribbean, etc…), but they usually have neither city lights, nor cloud maps, nor normal maps!

City lights, though a bit unrealistic (in reality, the night side looks completely black, city lights being just too faint against the bright sunny side), add a lot of beauty to the scene. So from an artistic point of view, planets need city lights. Besides, they become a lot less faint once you fly up close to the planet (say, while docking).

Cloud maps are obviously important - just take a look at the screenshots: cloud maps are 80% of the beauty of terrestrial planets, 100% of gas giants, and 40% of desert planets. They're important. Yet we only have one or two of good quality (high-res enough to be interesting and in the proper format). As soon as terrestrial planets become commonplace, this lack of variety will be very, very evident. I'm working on techniques that will add some variety without adding tons of new cloud maps - but I still need help here… I need contributions in the form of varied, high-quality cloud maps.

Normal maps are in fact normal-plus-height maps. The normal goes in the color channels, the height in alpha. I may change the technique in order to better pack the textures (right now, normal maps cannot be compressed without losing a lot of precision), but the fact is that absolutely no planet besides earth has a normal map. And that's very, very bad. Rocky planets, from my musings on the subject, will base their entire looks on complex, interesting normal and height maps. Without them, rocky planets will be dull. Terrestrials can get away without normal maps, but if anyone cares to take a trip to Sol and visit earth, you'll notice while flying up close how normal maps can actually increase the perceived level of detail near the surface.
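For the curious: a normal map is derived from topography by taking finite differences of the height field - the slope at each texel becomes the normal's x/y components. A miniature stdlib-only sketch (real tooling works on full images, of course):

```python
# Deriving normals from a height map via central finite differences.

import math

def normals_from_heights(h, strength=1.0):
    """h: 2D list of heights in [0, 1]; returns a unit (nx, ny, nz) per texel."""
    rows, cols = len(h), len(h[0])
    out = [[None] * cols for _ in range(rows)]
    for y in range(rows):
        for x in range(cols):
            # Central differences with clamped borders.
            dx = (h[y][min(x + 1, cols - 1)] - h[y][max(x - 1, 0)]) * strength
            dy = (h[min(y + 1, rows - 1)][x] - h[max(y - 1, 0)][x]) * strength
            nx, ny, nz = -dx, -dy, 1.0
            length = math.sqrt(nx * nx + ny * ny + nz * nz)
            out[y][x] = (nx / length, ny / length, nz / length)
    return out

flat = normals_from_heights([[0.0, 0.0], [0.0, 0.0]])
assert flat[0][0] == (0.0, 0.0, 1.0)  # flat terrain points straight up
```

This is also why random "greeble" bump maps don't cut it for planets: without a plausible height field to differentiate, the resulting normals carry no topography.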

So we need normal maps. I cannot produce them all - in fact I don't know how to produce even one of them: all the tools nVidia and the other companies provide for authoring normal maps are Windows-only :(

I'll be more than glad to help any generous soul who sets out on the task of producing any amount of normal maps for the planet types we already have in SVN. It's not like ship normal maps: you can't just draw a random black & white "greeble" bump map and be done - planets are actually all about topography. For earth we have topographic maps (that's where its normal map comes from), and for mars kind of too. But for our made-up planets?

So my job is far from done, and I’m already asking for help. No wonder commercial games need millionaire budgets ;)

Dreaming of Planets

Posted by klauss on August 13th, 2010

I was playing around with GIMP the other day (Photoshop for others), wondering about planets.

Browsing the net, I found tons of cool pictures.

Like this one from NASA, showing the earth as it looks from the moon.

Tons of cool pictures around. It must have happened to everybody, you find one cool picture, and start googling around for more, as if it were the next fix.

So I kept on googling, and found these:

Such gorgeous pictures made me start dreaming. What if VS looked like that?

Just imagine that. Approaching planet earth in your ship, gazing at views like these


Maybe fighting grazing the atmosphere


Perhaps losing, too…



Some day perhaps…

…on the horizon


Ok, I didn’t use gimp, at least not for anything other than converting those screenshots to jpg.



Movies… at last!

Posted by klauss on July 26th, 2010

About… er… wow. 2 years ago.

About 2 years ago I gave news about the “upcoming movies support”. It was “soon to be ready”. Well, soon meant 2 years it seems.

You all know how it goes, I guess. One gets inspired, one gets all the right ideas, starts coding, gets 90% there, then real life decides to throw you a curve ball. This time was no exception.

Curve balls come in two flavors: unexpected (or expected but unaccounted for) time-consuming real life stuff. Like school, work… girlfriend ;) … all that which threatens to seriously reduce commitment to any coding project. And, of course, unexpected (or mispredicted) time-consuming coding tasks.

Movies had both of those.

Let me recount the experience a bit.

In my last post 2 shameful years ago, I conveyed my intent not to rewrite the sound system. Really, it would be a huge task for the little time I had left - though it would certainly be a fun task, for me at least - so, in order to get things rolling out ASAP, I said, I would resist the temptation to rewrite the sound system. It turned out harder than I thought.

Getting streaming within VS meant hacking the main loop to perform either threaded or multiplexed reading from several audio streams (yep guys, I was, am, committed to supporting multiple streams playing at once - it’s pretty much required for many of the interfaces I have in mind). The existing sound system in VS had no place in the main loop, so I would have to write bookkeeping routines and data structures from scratch. The more I studied the idea, the more I realized that I would be, essentially, writing a parallel sound system from scratch.
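The shape of that multiplexing problem can be sketched in a toy example (this is just an illustration of the idea, not VS's actual audio API): the main loop pulls one buffer's worth of decoded samples from every live stream per tick and mixes them.

```python
# Toy sketch of main-loop stream multiplexing; not VS's actual API.

class Stream:
    def __init__(self, name, samples):
        self.name = name
        self.samples = list(samples)  # stand-in for a real decoder

    def pull(self, n):
        """Return up to n decoded samples, consuming them."""
        chunk, self.samples = self.samples[:n], self.samples[n:]
        return chunk

def mix_tick(streams, n=4):
    """One main-loop iteration: read n samples from every live stream,
    mix by summation, and drop streams that have run dry."""
    mixed = [0] * n
    for s in list(streams):
        chunk = s.pull(n)
        if not chunk:
            streams.remove(s)
            continue
        for i, v in enumerate(chunk):
            mixed[i] += v
    return mixed

streams = [Stream("music", [1] * 8), Stream("comm_vo", [2] * 4)]
print(mix_tick(streams))  # both streams contribute to the mix
print(mix_tick(streams))  # comm_vo has ended; only music remains
```

The real system has to do this (or the threaded equivalent) every frame without missing the audio device's deadline, which is why it needs first-class support in the main loop rather than being bolted on.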

So, eventually I said: “heck, I won’t write a sound system twice - lets start the rewrite, it won’t be less work than trying to hack streaming into this”.

So I did. I started the sound system. I drew inspiration from Ogre’s design, BTW. Thanks sinbad (I really learned a lot from Ogre).

I must say I went straight to the point. In contrast with my attempt to embrace Ogre as VS's rendering engine, a case in which I overdesigned quite a bit, more than once, this time it was the complete opposite. I designed the minimum functionality to get what I wanted, but always left the door open for extensibility (and the features I really really wanted - but weren't really really needed). In fact, I found myself refactoring things several times because the design fell short in this or that aspect. So I can't blame the lengthy time it took to roll this out on overdesigning, as with the Ogre port. No… this was just a lot of work done in really really short bursts spaced perhaps weeks or months apart.

Which brings me to the other curve ball: real life. I got busy at school. My dad got sick. I left school. My dad got better. I got back to school. I got really busy at school. Meanwhile, I worked 12 hours a day or so. I traveled a lot too (though this part was fun :D ). I don't know what else - oh ya, at one point I had two jobs, and no energy left (one coding, one teaching). Real life conspired in a way that I could dedicate only a few hours' time, perhaps once a month or so. I'd get nice full weekend sprints every now and then, but I would waste a lot of effort trying to find out where I had left off. A real waste.

Then came Trac. Our friend chuck_starchaser installed Trac on his server at wcjunction, and boy, it helped. It so happened that I was very used to working with Trac at work, so I started organizing myself just as I did there. Suddenly, the little time I had really paid off - if I wanted to know where I had left off, I would simply log into Trac and check. I love Trac. Ya, it's missing a lot of features, but it's so useful.

Then came testing time. Lipsync issues, crashes, segmentation faults - it didn't build on Windows at one point, I think - minor stuff anyway. All expected, in fact. After all, who writes 8700 lines of code that work flawlessly from the get-go on multiple platforms? Who writes 8700 lines of code in two years, btw? 12 lines of code a day. That's snail pace - I guess a lot of thought went into each line ;) (I sure hope so).

Yeah, you may have noticed I'm ashamed of how long it took me. Let's say that again: I'm ashamed of how long it took me. There.

In any case, the effort, the design, the thought that went into every step, and the sweet time it took really paid off. The final phase (testing) went as smooth as I could ever have hoped. The system isn't flawless yet: there are issues with slow computers, there's a lot of room for optimization, and there are still a lot of features to implement, not the least of which are threaded background loading and proper resource budget management. There are bugs. Known bugs to resolve. But the foundation is there, and it works.


So… I have to thank a few people who really contributed to this:

  • chuck_starchaser: you know… without trac I wouldn’t have gotten anywhere. Our talks also shaped the sound system in some of the pending features… with luck, in less than 2 more years you’ll see the features in all their splendor.
  • hellcatv: with the needed windows testing. I can’t even build in windows anymore.
  • sinbad: inspiring my design with Ogre’s design is no small contribution. I just hope mine is up to par.

I’m probably forgetting someone, I apologize in advance. Shenle for instance helped in getting a windows build, and even though the job was finished by hellcatv, getting it started wasn’t small either.

Now… let's put the feature to use?

Note: at the time of this writing I haven’t merged it into trunk - but it’s coming, a helluva lot sooner than the last time I said “soon”.


Vessels and Installations

Posted by pyramid on October 29th, 2008


Since the last release in April, artists have been working on modeling, texturing, and integrating new spacecraft and installations into the Vega Strike game.

Bringing an asset such as a vessel up to the point where it is usable is a long and arduous task. It starts with the selection of an appropriate candidate from our recently reorganized 3D Models list. The artist must consider the model type, role, and specifically the faction to come up with an adequate concept proposal contemplating the architecture and texture of the model.

Early model concept

Once the concept is accepted, the modeling can commence. While small models can consist of a single mesh, larger or more complex models can be subdivided into several sub-meshes. In addition to the larger scale architecture, a good model will convey its dimensions by adding smaller features like doors, openings, pipes, ladders, cranes and other small scale features. This process is called greebling. After finishing the model, it needs to be unwrapped to provide the outline of the shapes for subsequent texturing, which is mostly done using a paint program like GIMP. Depending on the intended material in various model areas, the textures for color, specular, and glow are painted. Once the main model is finished, marker objects for docking ports, engine thruster exhausts, turrets, subunits, and blink lights must be positioned around the model. Additional meshes for shields, smaller levels of detail (LOD), and simplified collision meshes might be provided too, though it is not a must. After export of the meshes and markers, the unit is ready for integration into the game data.

Placement of thruster markers on the Archimedes model

The integration of a unit has long been a very tedious and error-prone process mastered by only a few. With the advent of modding tools, in particular the Unit Converter (which will be discussed in a separate devblog), the integration promises to become merely a task of pointing the correct textures to the corresponding meshes. After conversion of the meshes from Wavefront obj format to Vega Strike's internal bfxm format, we need to provide a HUD image and tweak the multitude of unit stats, ranging from the unit scale factor to the item categories that a base offers for purchase.

My intention so far was to give you a short glimpse of the many tasks involved in getting a new unit into the game. Nevertheless, what you were probably hoping for - namely, information on models added after the last release - is what I won't let you wait any longer for.

The Bell, a communication ship

An Andolian destroyer, the Kahan

In the spacecraft fleets of the major factions we have excellent contributions (where not mentioned otherwise, the majority are from the shipyards of our talented modeler and texturer Fendorin; other artists are noted beside their models):
* Archimedes
* Bell
* Ct1000, Ct2000, Ct3000 (by rivalin)
* Derivative (by Deus Siddis)
* Determinant (by Etheral walker and Nózmajner)
* Emu
* Jackal (by Oblivion)
* Kahan
* Knight
* Tridacna
* Xuanzong


Rlaan Xuanzong

We also had updates to already existing models, in terms of improving the mesh and/or textures:
* Entourage
* H496
* Mk32
* Regret
* Vigilance

Vigilance wallpaper

Regret - Shmrn fighter

Curiously, when starting to work on installations, I came across one station, the Diplomatic Center (by Strangelet), that has probably been sitting in the data set for years but has never been spawned. There are also three older contributions by Oblivion (who is now involved in the Angels Fall First game development) that have been added:
* Uln Asteroid Refinery
* Uln Refinery
* Uln Commerce Center

Diplomatic Center

Uln Commerce Center

Fendorin was active constructing bases in Rlaan space, but not only there. The new stations of his craftsmanship that a space-faring traveler may encounter:
* Civilian Asteroid Shipyard
* Rlaan Commerce Center
* Rlaan Fighter Barracks
* Rlaan Medical
* Rlaan Mining Base
* Rlaan Star Fortress
* Shaper Bio-Adaptation Station

Rlaan Commerce Center

Rlaan Fighter Barracks

Civilian Asteroid Shipyard

Rlaan Star Fortress

Including new units in the game is not the only work that was done. We have upgraded most of the vessels' HUD images to higher resolution and better quality, and were careful to outfit the new additions with engine thruster exhausts, turrets, and blink lights.

Mechanist Built Mk32 Battle Cruiser with H496 shuttle flyby.

We will continue contributing (as we've still got some uncommitted models up our sleeves) and hope that you enjoy the work we have been putting into making the Vega Strike universe a richer experience.


August, or things like it

Posted by jackS on July 29th, 2008

While August may be synonymous with “vacation” in many parts of the world, in my particular academic discipline, August is currently one of a number of months synonymous with “conference submission deadlines” (my field of computer architecture being one in which conferences, rather than journals, are the primary paper targets).

As some of you may have noticed, I’ve been (mostly) incommunicado for the last few weeks, and I’ll continue to be out for at least the next couple of weeks (until after the HPCA deadline). However, if anyone does need to reach me, I’m still checking my VS-related e-mail, just not keeping up on forums at the moment.

Graduate school has been particularly demanding of my time in 2008 (thesis proposal and related prep taking up much of the first quarter of the year, and then much of the next several months trying to make up time lost working on the proposal rather than paper-oriented research :-P), but, with any luck, I’ll both have some new publications to show for it in the not too horribly distant future, and some breathing room in which to commit larger time blocks to VS. With luck or without, there are no deadlines in September, so I’ll at least be much more involved than in August :).

For those of you who get to take some time off - enjoy yourselves :)

For those of you poor fellows who’re  slaving away at conference deadlines - if you hurry, you can still make ASPLOS instead of HPCA ;-).

Extensions Renaming

Posted by pyramid on June 12th, 2008

Recently, there were a lot of questions on why we are renaming extensions for image files and textures. The changes were started after the 0.5.0 release and can be found in SVN only at this point in time. They will be in released versions from 0.5.1 on.

In 0.5.0 we had png, jpg, and bmp, and the extensions did not necessarily represent the codec used in the file. Having files with .bmp extensions that were in fact .jpg or .png was completely messy. Also, with 0.5.0 we switched all in-game graphics (images and textures) to use one of the S3TC DDS compression algorithms (DXT1, DXT3, or DXT5; see info on compression at http://en.wikipedia.org/wiki/S3_Texture_Compression). This means that the existing extensions became misleading and obsolete.

S3TC (DDS) compression is currently the most widely used and, for most cases, has the best tradeoff between memory/bandwidth consumption and graphics quality compared to other compression algorithms such as Dithering, 3Dc, or FXT1 (see the comparison in http://www.digit-life.com/articles/reviews3tcfxt1/). It is supported by all GPU manufacturers and by the two most prominent graphics APIs, DirectX and OpenGL. This still does not mean it will remain that way over the next 2-5 years (see, for example, the specialized normal-map compression algorithm in http://ieeexplore.ieee.org/iel5/4089190/4089191/04089271.pdf).

The above issues (the compression switch and the naming mess) are the two main reasons that led us to conclude that renaming the graphics extensions is worthwhile: it gives us clarity about the contents of an image or texture, while preparing the ground for any future codec changes that might come along the way. With this change we are bringing our texture compression support up to the state of the art.

The renaming is well underway, with only small portions of the data still awaiting conversion. The new extensions come in two flavors:

  • .texture contains, besides the base texture, mipmap levels, and is used wherever objects are drawn at varying distances, i.e. in space. The GPU automatically switches the mipmap level according to the object’s distance from the viewer; there is no reason to draw 1024×1024 pixels when a ship is 15 km away and only visible as one or two pixels on the screen. Textures can come with transparency (e.g. sun flares) or without (ship diffuse textures).
  • .image files are graphics without mipmaps and are used for all graphics where the distance to the camera does not change. Space backgrounds, cockpit gauges, and HUD images are where we apply the image type in space. Mipmap-less images are also used for splash screens, cargo/weapons/upgrades, and base and planet backgrounds, to name a few. They can equally come in transparent or opaque flavors.
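Since both flavors are DDS data under the hood, the distinction is visible right in the DDS header. Here is a minimal Python sketch of reading it; the field offsets follow the standard DDS_HEADER layout, while the function name and usage are our own illustration:

```python
import struct

def dds_info(path):
    """Read a DDS header and report size, FourCC, and mipmap count.

    A .texture file should report a mipmap count > 1; a .image file
    typically has 0 or 1.
    """
    with open(path, "rb") as f:
        header = f.read(128)  # 4-byte magic + 124-byte DDS_HEADER
    if header[:4] != b"DDS ":
        raise ValueError("not a DDS file: %s" % path)
    # All fields are little-endian DWORDs at fixed offsets.
    height, width = struct.unpack_from("<II", header, 12)
    mipmap_count = struct.unpack_from("<I", header, 28)[0]
    fourcc = header[84:88].decode("ascii", "replace")  # e.g. DXT1/DXT3/DXT5
    return width, height, fourcc, mipmap_count
```

For example, `dds_info("explosion.texture")` (a hypothetical file) would return something like `(256, 256, "DXT5", 9)`.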

There is no association whatsoever with the specific directories where the files reside. For example, under the animations folder we will find subdirectories with .image types (for comm animations or splash screens) and other subdirectories with .texture types (e.g. for explosions or blinking lights).

The idea was that .image and .texture aren’t misleading: there are no existing file formats with those extensions. And, since they are format-agnostic as well, they will make things easier to maintain in the future, because a change of format (say, making them png) needs no change to the resources referencing them (such as sprite files, animation files, meshes, or system definition files). This is the point that makes the rename really necessary: mods in particular have asked for support for higher-quality textures by replacing the dds textures with lossless png textures or high-quality jpegs. If we said “let’s rename them to .dds”, then a mod would end up with a dds file that was actually a png. So the only real choice is the format-agnostic one. Since, over time, the desired codec for images has changed and will keep changing with the available technology, we decided that a codec-agnostic extension was the best way to stop the constant confusion people have been experiencing when a filename’s extension doesn’t match the codec actually used.

The renaming might be annoying for those who have to re-download all the graphics. Although we were very careful to use the svn move command for the renaming, it appears that svn does not behave correctly in all cases and, instead of also moving the files in your working copy, performs a completely new download. But this one renaming and re-download should remove any need to ever change the names again when the underlying data type changes, and should end user confusion over conflicting info. It should have been done long ago, but had always been put aside until today.

The folder structure was also changed along the way and is now as follows:

  • data holds the dds compressed images and textures
  • masters holds the source and project files used to create the art, plus the master images with the correct codec-dependent extensions.
  • hqtextures holds the non-dds compressed high quality png or jpeg files (though with the same extension as in data).

This means that the data folder is not intended for artists at all; it is only VS UTCS game data. You have the masters repository where things have proper extensions and are artist-friendly, but data need not be. This means you should not edit the data images but always edit/generate the master images and then export to data. hqtextures is optional, for those who want high-quality in-game graphics. These files will not be compressed when loaded into the GPU, so beware: memory consumption will increase. You have been warned.
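To put rough numbers on that warning, here is a hypothetical back-of-the-envelope helper. The per-pixel figures are averages, not exact block math: DXT1 works out to about 0.5 bytes per pixel and DXT5 to 1, while uncompressed RGBA8 uses 4; a full mipmap chain adds roughly a third on top:

```python
def texture_bytes(width, height, bytes_per_pixel, mipmaps=False):
    """Approximate GPU memory for one texture.

    bytes_per_pixel: ~0.5 for DXT1, ~1.0 for DXT5, 4 for uncompressed RGBA8
    (averages; S3TC actually compresses in 4x4 blocks).
    """
    total = width * height * bytes_per_pixel
    if mipmaps:
        w, h = width, height
        while w > 1 or h > 1:  # add each smaller mipmap level
            w, h = max(w // 2, 1), max(h // 2, 1)
            total += w * h * bytes_per_pixel
    return total

# A 1024x1024 ship texture with a full mipmap chain:
#   DXT1 stays around half a megabyte, while the same texture
#   uncompressed (as with hqtextures) costs over 5 MB on the GPU.
```

So replacing the dds data with uncompressed high-quality textures costs roughly eight times the video memory per texture, which is why the warning above is not idle.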

With the renaming comes the question: how can I view the images and textures without resorting to a complete download of the masters repository? It is possible, provided your viewer understands the dds format. For GIMP (Win/Posix) and IrfanView (Win) there are plug-ins available that can read dds files (see http://vegastrike.sourceforge.net/wiki/Links:Graphic_Applications#2D_Graphic_Converters for a list). Several viewers come with native dds support (e.g. KDE supports DDS, hence so do GwenView, KView, and Konqueror). If in trouble, try renaming the graphics to .dds or .png and see if your application can read them.

Actually, all software should rely on file-magic mime typing to determine the codec of a file (where such a distinction can be made), not on extensions; extensions are for humans. In theory, no application worth using should have any problem loading .image or .texture files if it has the appropriate decoder installed. Unfortunately, reality can be different and a bit more annoying. In Windows, there are several ways of associating extensions with specific applications (e.g. Open With…/Always use this program…), which should ease desktop integration of the new extensions. In Linux you can do something similar (Open With…, select the application, e.g. GwenView or KView, and check “Remember application association…”) and will then be able to view the images directly from Konqueror. There might be other ways under Linux that I haven’t yet explored, and Mac remains black magic to me.
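Magic-based detection is only a few bytes of comparison. As a sketch of what “extensions are for humans” means in practice, the following hypothetical helper identifies the codecs mentioned in this post purely from the first bytes of the file, ignoring the extension entirely:

```python
def sniff_codec(path):
    """Guess the codec from file magic, ignoring the extension.

    Covers only the formats discussed in this post; anything else
    is reported as 'unknown'.
    """
    with open(path, "rb") as f:
        magic = f.read(8)
    if magic[:4] == b"DDS ":          # DirectDraw Surface
        return "dds"
    if magic[:8] == b"\x89PNG\r\n\x1a\n":
        return "png"
    if magic[:3] == b"\xff\xd8\xff":  # JPEG/JFIF SOI marker
        return "jpeg"
    if magic[:2] == b"BM":            # Windows bitmap
        return "bmp"
    return "unknown"
```

With this, `sniff_codec("flare.texture")` would report "dds" regardless of what the filename says, which is exactly the behavior a well-behaved viewer should have.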

Hopefully this clears things up a bit.