Jad Engine Blog News Feed 
Monday, June 11, 2012  |  From Jad Engine Blog

In my last post, there were some comments from Olmo about breaking changes. Olmo is one of the best .NET developers I know, and he offered some very interesting ideas on how to handle them (for example, leveraging Roslyn to fix breaking changes).

Our position on breaking changes is that they are bad, very bad. We don’t want them, and we will try to avoid them as much as possible. We think it is a pain to work on a project and “fear” updating it because every time you do, you have to fix your code to make it work again.

But the reality is that breaking changes exist, and they will happen no matter how hard we try to avoid them. While we have used Wave in-house quite a lot, we are so used to it that we can miss issues that would be glaring to someone starting with it without prior knowledge. For example, we recently had a new person using it to build demos, and we got very interesting feedback from him. I am sure this situation will repeat itself a lot once we release.

So, our idea on this front so far is to do something similar to how the XNA team handled their releases: try to avoid breaking changes in minor releases unless the issue is very important, and save up for major releases all the small changes we held back to avoid breaking things. Not every major release will have to be like that; sometimes a release will just add support for a new platform, or add new capabilities to the engine as platforms evolve. But we will do our best not to break things in minor updates, so people can work on their projects confident that they will not lose time because of an upgrade.


Breaking changes was posted on 06/11/2012 at Kartones.Net.

Monday, May 14, 2012  |  From Jad Engine Blog

When you design a framework/engine/library, you always have to think about the “surface area” of the API: how many classes, methods, and so on the user will see and be able to use.

In Wave, we have divided the engine into several assemblies:

  • WaveEngine.Common: this assembly contains the most basic functionality of the engine. For example, the math library sits here. It also contains a set of core interfaces that represent the engine in its most barebones state: IAdapter, IGraphicDevice, IIOManager, IInput,…
  • WaveEngine.Adapter: this is not really an assembly, but a set of assemblies with the same name, one for each platform we support. An adapter is simply the code that is needed for the engine to work on a given platform. These Adapters implement the core interfaces defined in WaveEngine.Common (IAdapter,…).
  • WaveEngine.Framework: this is the assembly the users of Wave Engine will be using to code. Framework wraps over the low level operations defined in the core interfaces, and presents the end user with a higher level API that is more powerful and simpler to use.
  • There are a few other assemblies, but they are just extra sugar and not really mandatory (WaveEngine.Components, WaveEngine.Materials,…).

So, in a perfect world, when a user starts creating a project with Wave Engine, he will just reference Framework and start working happily. In reality, that person will probably also need to reference Common, as some types that sit there (especially the math types) are commonly needed in a normal project.

Once you add a reference to Common, there is a problem. All the core interfaces (and a few other things) are public, but they are not designed to be used when making a game; they are designed to be used when writing an adapter that supports a new platform for the engine. The user will see them nevertheless, as they have to be public so the Adapter assemblies can consume them.

This is somewhat problematic. If by mistake we expose one of those low-level interfaces in Framework (for example, we used to have quite a few references to IAdapter), the user can bypass the high level API (by mistake or by choice) and start using things that are not designed for his scenario (making a game). A solution would be to split Common into two assemblies, one with the really basic types and another with the types used by the adapters, but those assemblies would be quite small if separated.

In the end, we have made Framework so that none of these classes and interfaces are exposed publicly anywhere, so the user has no way to get a reference to an object implementing them (they are always interfaces or abstract classes). The user could implement them himself, but that doesn’t make much sense, so we do not worry about that case. The user could also get them using reflection, but we do not think that’s an interesting scenario to take into account either.
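As a rough sketch of the pattern (TypeScript used here as a stand-in for C#, where exporting plays the role of public vs. internal; IGraphicDevice comes from the list above, while DirectXDevice and Screen are made-up names):

```typescript
// Core contract (lives in "Common"): exported, because each platform
// Adapter assembly has to implement it.
export interface IGraphicDevice {
  clear(): string;
}

// A hypothetical platform adapter: NOT exported, so user code can never
// name the concrete type -- the analogue of keeping it internal in C#.
class DirectXDevice implements IGraphicDevice {
  clear(): string {
    return "screen cleared";
  }
}

// High-level Framework class: holds the device privately and only exposes
// the wrapped, higher-level operation.
export class Screen {
  private device: IGraphicDevice = new DirectXDevice();

  beginFrame(): string {
    // Users call this; they never touch IGraphicDevice directly.
    return this.device.clear();
  }
}
```

From the user’s side only Screen is reachable; no public API ever hands back the IGraphicDevice reference, so the low level stays out of reach without reflection.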

So, the main place where we needed to spend more time (and where we have spent the most time by far) making sure the public API was solid is WaveEngine.Framework. Take into account that Wave Engine was developed at the same time as our first game, ByeByeBrain (BBB), so both evolved quite a lot during their main development (which lasted more or less a year). It was interesting for me to see how the oldest parts of the code were, for example, accessing Common directly, while the newer ones were using Framework; it shows how the engine evolved, providing only very basic services at the start and then growing to provide higher level, more powerful classes that simplify daily scenarios or handle some platform differences that even Common couldn’t abstract. For example, serialization works slightly differently in .NET (and differs between the PC and the Xbox 360) and in MonoTouch/MonoDroid.

When we decided to review Framework, we went through all public fields, properties, and methods and started deciding what to do with each of them. Some things were public but were not used outside Framework (maybe we thought they would be useful, but in the end they were not), and were made internal. Others were public and were used, but the same thing could be achieved another way, and we did not want to expose two ways of doing the same thing. And so on and on. In general, we tried to make as much as possible private/protected/internal, yet it is sometimes hard to gauge whether you have done it right, as we only have BBB as a case study (plus some internal samples, but they are fairly simple). Did we make something internal because it was not useful, or just because it was not useful in BBB? That was a very important question for us, and sometimes we did not have a clear answer, so in those cases we decided to hide things: making something public later has less chance of breaking things than making something private later.

We think we have ended up with a very clean API, with enough power to be usable but not overly complex, but we need to validate it ourselves first. Right now the current Wave Engine build and BBB sit in a Stable branch, while this new rewrite sits in a Development branch, which does not compile BBB yet. We now have some work to do migrating BBB to this new branch to see how it performs in a “real scenario”. BBB is a very graphics-intensive game, and during its original fast development there were some design decisions whose rationale is lost today. For example, did we expose a low level operation from IAdapter because using the high level version was too slow for a common scenario? Or more importantly: have we introduced some subtle bug without realizing it? Those are the questions we will be able to answer when we port BBB and validate our work. We have ported the samples and they work nicely, but switching BBB over while we finished its release on iOS and Android was simply not feasible for a team as small as ours. Now that the iOS version has been released and the Android version is coming to an end, we can start this final validation and fix any pending issues we find.


What should be public in an API? was posted on 05/14/2012 at Kartones.Net.

Wednesday, April 11, 2012  |  From Jad Engine Blog

My current job at Weekend Game Studio is to review the codebase of Wave Engine. We are preparing for a public release, and we want to make sure the engine API is as good as we can make it. Even though I have been playing with it for less than two months, I am the one in charge of the review for one reason: in general, the more you work on something, the less you see its problems (this applies to many other things, not just coding).

Of course, given that I have used the engine very little, sometimes I give feedback that simply shows my ignorance of the product. But when the engine team has to explain to me why they made a certain decision, they also force themselves to think about it, which helps us a lot in the long run.

On Monday, one of the last things I saw in the code was that the class Entity was sealed. I spent part of the evening at home thinking about a topic that is somewhat controversial when writing a library: whether classes in the library should be sealed or not.

I used to be pro-sealing everything, but my views have changed with time. I think I started to change my opinion after talking about the subject with Michael Cummings at an MVP Summit. Michael has been maintaining the Axiom engine for years, and he was very much in favor of not sealing classes (if I remember right :) ). I have also heard this complaint from time to time on forums and blogs about some libraries (for example, XNA).

So while I now think it is better not to seal unless you have a very good reason, I am not totally convinced about it. On Tuesday I decided to tweet about the subject, and it turned into a very interesting conversation with Rodrigo Corral, Jorge Serrano, and Enrique Amoedo.

The first thing I turn to when I have a design doubt is Microsoft’s Design Guidelines for Developing Class Libraries. In the Design for Extensibility section there are two topics about unsealed and sealed classes. They are pretty short reads, and they seem to be very much in favor of not sealing:

Consider unsealed classes with no virtual or protected members as a great way to provide inexpensive, yet much appreciated, extensibility to a framework. By default, most classes should not be sealed.

And:

Do not seal classes without having a good reason to do so. Do not assume that because you do not see a scenario in which extending a class would be desirable, that it is appropriate to seal the class.

There are also quite a few interesting posts about the subject around the internet, most with very long arguments about the topic. This is one of them, which even includes a comment about the subject from Eric Lippert.

The biggest argument for sealing is usually that if something was not designed (and tested) for extensibility, it should be sealed. Allowing inheritance could break the class, or break other classes that depend on it in ways that are hard to predict, and the cost of maintaining and testing something unsealed is much bigger. Sealing makes the life of the library developer easier (classes cost less effort), and keeps users from shooting themselves in the foot by extending something that was not designed for it (or that they didn’t understand very well before extending).

The argument for not sealing is that library developers cannot imagine every possible use their users may have for a given class, so sealing forbids scenarios that may be interesting to the users of the library. If your library does not let users do what they want, you have unhappy users, which is a problem.

In my experience, I have very rarely heard users complain about shooting themselves in the foot by extending something that shouldn’t be extended. I like the idea of sealing as a way of warning a user that “you inherit from here at your own risk”, but the implementation of sealed is too restrictive for that. On the other hand, I have heard quite a few complaints from users about not being able to inherit from something sealed.

The biggest problem with sealing appears when the sealed class is a parameter of a method. In that case, there is no way to pass a specialization of that class to the method, so users need pretty strange workarounds. Library developers have a way to avoid this problem: if the class is sealed, do not take it as a parameter; instead, take an interface and make the sealed class implement that interface. That is somewhat better, but it seems too clunky to my eyes:

- First, you have created an interface just for the sake of creating an interface. It was not because it made sense; it was just because you needed a workaround.

- Second, you have added more weight to your API and library: the interface, and probably at least one public implementation of it.

- Third, interfaces version very badly, as any change to them means a breaking change for every implementation out there. Your users upgrade to the latest version of your library, and suddenly nothing compiles. Users hate that a lot in my experience, and I have seen the horror of versioned interfaces (in the ArcGIS API), which is even worse.
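To make the workaround concrete, here is a minimal sketch (TypeScript as a stand-in for C#; in the real C# case Texture would be marked sealed, and the names ITexture, Texture, and draw are all made up for the example):

```typescript
// The interface that exists only as a workaround: without it, a method
// taking the (sealed) concrete class could never receive a substitute.
export interface ITexture {
  readonly width: number;
  readonly height: number;
}

// The library's own type. In C# this class would be sealed, so users
// could not subclass it to pass a specialization into draw().
export class Texture implements ITexture {
  constructor(readonly width: number, readonly height: number) {}
}

// Because draw() takes the interface, users can still hand it their own
// implementations (a mock, a wrapper, a procedural texture, ...).
export function draw(texture: ITexture): string {
  return `drawing ${texture.width}x${texture.height}`;
}
```

The cost is exactly what the three points above describe: an extra public interface whose only reason to exist is the workaround, and one that will break every external implementation if it ever changes.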

The only point I can agree with is that sealing a class makes things easier for me, the library developer. But given that my final objective is making the life of my users easier, not mine (unless the cost is terrible), I prefer not sealing most of the time. On the other hand, the only thing that keeps me from being 100% sure about that decision is that unsealing a class is not a breaking change, so depending on how fast we are able to ship new versions of Wave Engine, it could become a non-issue (seal by default, and only unseal when someone finds a case where it is needed).


Monday, April 02, 2012  |  From Jad Engine Blog

This year Weekend Game Studio (the gaming arm of Plain Concepts, the company where I work now) decided to have a booth and give a sponsored talk at GDC 2012. Even though I had joined the company just a month earlier, I was lucky enough to get to go to GDC too (and be one of the two speakers of our session, which was a little stressful to say the least :p).

The idea for GDC was to showcase our first two games, the tower defense ByeByeBrain and the web game The Cure. We also wanted to talk about our own internal game engine, Wave Engine, and show how the same code could run on WP7, iOS, Android, the web, and PC.

So this is a postmortem from my point of view of things that went right and things that went wrong.

What went right

Weekend Game Studio and the Seattle office of Plain Concepts were born just over a year ago, and they are still in the process of getting firmly established. So going to GDC, giving a sponsored talk, and so on was a huge expense for us. We also decided to go to GDC pretty late, so in the end we could only get a booth in one of the corners of the expo pavilion. And we could not get huge plasma TVs or a ton of goodies to give away, as our budget was pretty limited.

In general, it looked like our booth was going to be pretty “gray/boring”, as we didn’t have the money to make it stand out. So we decided to go another route. The artist of ByeByeBrain had the idea of making the booth look like a zombie refuge and having us disguised as zombies. Even if we weren’t very sure about the idea at the start, it was a huge success. Everyone walking around noticed us immediately, and a lot of people stopped just to say that they loved the booth, which in turn allowed us to ask about them and keep the conversation going.

I am sure that without the themed booth we would still have had quite a few visitors (the expo floor was very busy, especially the first day), but the great decoration, the disguises, and so on made GDC much more of a success and allowed us to meet many more people (from students to very high profile execs).

Another thing that went right, and it may seem a little obvious, was the decision to go to GDC at all. We were not sure if we would be able to recoup the costs, but in the end I think it was more than justified. We had business meetings all day long, from very small projects to really big opportunities.

We also got a lot of portfolios from professionals looking for a company to join or for freelance work. Right now Plain Concepts has a lot of programmers and web designers, but we are severely lacking in other areas important for games, like 3D artists, game designers, composers, sound engineers,… Thanks to GDC we have built a nice pool of people we can contact in the future for work and collaborations, and it is way nicer to have met and talked with someone face to face than just by email.

And we also got a lot of portfolios from students interested in internships, which took us totally by surprise. I had taught at university before, and while it had its ups and downs, it was something I enjoyed a lot. I think having interns working with us, and the experience of guiding and teaching them, would be quite interesting.

What went wrong

Sadly, not everything went great. There are a few things we will take into account for future editions to avoid making the same mistakes.

First, we had a problem with our booth: the booth we received was not the booth we were expecting (we were missing some lateral walls where we were going to hang part of our decoration). The GDC organization was very unhelpful; they said it was a mistake on our part for misunderstanding the booths and their emails (which honestly could have been written much more clearly, and we asked very specific questions in most of them). Even after talking with them for a while, they basically told us there was nothing they could do. Luckily, the company building the booths was able to help us (for a price; everything has a price down there :p), and we got some supports to hang our booth decoration on. Even if we had to pay, the attitude of the booth-building company was very helpful and friendly, which we appreciated.

Our second mistake was not being really prepared for GDC. It was our first time and we didn’t know what to expect. The first day we came to the booth with not many business cards, and we ran out of them in less than two hours :S The booth was full of people most of the time (in general, the first day the whole expo was full of people), and we were a little overwhelmed by how many questions and how much interest we got that day. For example, we started receiving business cards, and the first day we failed to take notes on most of them (whether the person was looking for a job, a freelancer, a future partner, a distributor,…). On the following days we carried many more cards and took notes on everyone, which helped a lot in keeping things organized.

Another example of how badly prepared we were came when students asked about internships at Weekend. We are a company founded by Spaniards (and pretty young in the States), so we had no idea what paperwork (if any) was needed to have an intern working with us. Now we find it funny that we had not even thought of that before going to GDC, but we could never have imagined that students would be interested in doing an internship with us.

I also think that something went “wrong”, although this is much more subjective: we could not leave the stand and check out the rest of GDC. We were only three people at the stand, and David was quite often out in meetings, so Anton and I had to stay there pretty much all the time, chatting with people and showcasing ByeByeBrain, The Cure, and Wave Engine. I would have liked to see some of the sessions and the other pavilions, though.

What I have no idea how it went

And even after all of that, there is one thing I have no idea whether it went right or not: my sponsored talk about Wave Engine. On one side, we had forty people there, which was much more than we were expecting (even more given how many things were happening at the same time as our talk). But on the other side, I got no questions at the end (publicly; I got a few privately), and I can’t stop thinking that people were probably expecting a much more technical talk. I think most talks at GDC are very heavy on the technical side, as the general GDC audience has a high degree of knowledge, but ours wasn’t very deep. I showed some basic code examples that explained the engine philosophy, but I didn’t get into all the gory details of writing the adapters for each platform and so on.

Nevertheless, it seems the GDC organization sends the evaluations to the speakers, so I really want to read them and see what can be improved if I repeat the experience. Getting evaluations after a talk can be harsh sometimes, but they are a great tool for improving. I remember the evaluation of my first talk ever (a talk for the Microsoft University Tour in Spain): it was a total disaster, but it contained lots of very useful comments, so it was welcome.

Summary

So that’s more or less it for GDC 2012. After the expo, when we returned to Seattle, we spent the next week organizing all the business cards, sending emails, and checking portfolios, samples, and demo reels. It was an interesting week, and I have to say I saw lots of impressive portfolios.

By the way, I would like to add that we were lucky to have the people of Trioviz as one of our neighbors; they were super fun, and their 3D technology was simply awesome. I tried Arkham City for a while and the 3D effect was truly nice. And they gave us some 3D glasses :) We also had some very nice neighbors just in front of us who offered us tea and other refreshments every day, but sadly I don’t remember their name (and I can’t seem to find the 2012 GDC expo floor map).

I really hope we come back next year. It was a great personal experience, and we did a lot of business, so it was also worth it for the company (I guess; I don’t know exactly how much it cost us to go, but I think the visibility and contacts we gained offset the costs).


GDC 2012 Postmortem was posted on 04/02/2012 at Kartones.Net.

Saturday, January 21, 2012  |  From Jad Engine Blog

So, yesterday was my last day at C Tech Development Corporation, and my last day as a freelancer too. These last three years have been a great learning experience, and I feel I have grown quite a lot as a developer.

I landed the job thanks to my work on the old Jad Engine, where I met Reed and Bengt, and I honestly think I have been very lucky to work with them, along with Adam and Devlin. They are very talented devs, and I have learned as much from each of them as I have been able to. I am also proud of the product we have been building during this time: EnterVol, an ArcGIS plugin that adds volumetric analysis of chemistry/geology data. Internally, EnterVol is a really interesting piece of software, using WPF, WCF, TPL,... It is rather complicated, but also full of very elegant design decisions and code. I really hope it brings lots of new customers to C Tech; they deserve it :)

I want to give a special mention to Reed, my boss. Reed Copsey is a Microsoft MVP in C#, maybe not as well known as other famous people in that area, but he is one of the best (you can check Stack Overflow and see him on the first page of top users ;) Not only is he an outstanding developer, capable of facing any problem you throw at him (seriously, he can jump from databases, to graphics, to algorithms, to WCF, and not even blink), he has also been an outstanding boss. He has led the team carefully, dividing the work, keeping an eye on us, helping when needed, and he has been more than understanding with some cultural differences that exist between Spain and the USA, especially regarding holidays. Given how hard it is to find a good boss, I really appreciate this.

And the future? Well, I am moving to Plain Concepts, a Spanish Microsoft partner. I am very happy about this for several reasons. First and foremost, because of the people: I know quite a lot of people in the company from old jobs, Microsoft events,… It is always nice to work with people you already know you get along with, and even nicer when they are technical leaders in their fields. I hope I’ll learn quite a few new things from all of them. Oh, and the company’s average age is quite young too :)

Second, because Plain Concepts has just started developing games via Weekend Game Studio. They have already released two games and are developing their own multiplatform game engine called Wave (which powers Bye Bye Brain). Wave is mostly the work of two people, one of them being Javier Canton, a fellow Spanish XNA/DirectX MVP. I really look forward to working with them on the engine and improving it in the future. And to releasing quite a few games :)

And third, because I’ll be moving to their Seattle office (if I get the visa; come on, immigration), which is a huge change for me. I needed to get out of my comfort zone a little, and I wanted a change after working remotely from home for so long, and this achieves that by far. It also lets me experience what it means to live abroad for a long time, something I’m very curious to discover.

So, one path ends and another starts. I am really eager to discover how this new part of my life and professional career will develop.


Changing jobs was posted on 01/21/2012 at Kartones.Net.

Saturday, January 14, 2012  |  From Jad Engine Blog

Lately, in my free time, I have been developing a small LightSwitch application to manage roleplaying games of Birthright. Birthright is like Dungeons & Dragons meets Civilization, a mix of RPG and grand strategy. The problem is that the strategy part involves so much accounting that you usually need a computer to handle it (especially if you have 50 or more players, which is common in play-by-email games).

I have always had the idea of building a tool for Birthright, but doing it the “traditional way” (ASP.NET or WPF/MVVM, WCF, EF) was too time consuming. Then I discovered LightSwitch and was amazed at how fast it was for simple CRUD applications. It also let me build either a desktop tool or a web-based tool (very interesting, as lots of games are played over forums).

I was all happy, developing my tool, testing it internally, and then one day, I hit the point where I could show it to other people.

And the nightmares began: allowing other (non-dev) people to test the tool was a total mess. LightSwitch is not really designed for this scenario (my bad for not checking earlier), and for a desktop tool it assumes the person installing it has access to the target machine (you need to install and configure a SQL Server…).

I went to the MSDN forums for help, but found very little: “make your own installer”. Great, I had already seen that one coming :p

So, I sit down to make my own installer. First, I run a virtual machine on another computer and try to install the tool by hand, to see the steps involved. I spend nearly a whole day fighting with this, partly because the LightSwitch install instructions are pretty useless, and partly because I knew very little about deploying SQL Server. But I manage to figure it out.

Then I start writing the installer. Writing installers is boring, hard, and in general a totally forgettable experience, but luckily there are tools to help with this, and the best one in my opinion is Advanced Installer by Caphyon (disclaimer: I have an NFR license for it because of my Microsoft MVP award).

The following steps are what was needed to deploy my LightSwitch application using Advanced Installer, although the idea is the same no matter what tool you use.

Installing Prerequisites

First, you need your installer to install some prerequisites. I was doing my tests on a Windows 7 x64 VM, so my list was:

  • .NET 4.0 Framework. I have set it to download in case it is not present on the target machine. The url to download is:

http://download.microsoft.com/download/1/B/E/1BE39E79-7E39-46A3-96FF-047F95396215/dotNetFx40_Full_setup.exe

And the registry key to check is:

HKLM\SOFTWARE\Microsoft\NET Framework Setup\NDP\v4\Full\Install (value = 1)

  • SQL Server 2008 R2 Express. The database used by the application. I decided to bundle it with the application, as I assumed most normal users wouldn’t have it installed already. I chose 2008 R2 because it has a 10GB limit instead of the 4GB limit of earlier Express editions. At this point, I decided to make two installers, one for x86 and another for x64, so I could bundle the correct versions. The key to check, according to the MSDN forums, is:

HKLM\SOFTWARE\Microsoft\Microsoft SQL Server\SQLEXPRESS\MSSQLServer\CurrentVersion\CurrentVersion (value 10.50)

Although I’m not very sure about this, registry keys and SQL Server are a total mess :(

  • Silverlight 5. This one is strange: it’s not marked as a prerequisite by LightSwitch itself, but LightSwitch applications are Silverlight applications, so they can’t run without it. There are x86 and x64 versions too, and I decided to bundle them in the installer (as I couldn’t find a good direct download link). The registry key is:

HKLM\SOFTWARE\Microsoft\Silverlight\Version (value 5.0)

This looks fine at first glance (it did to me!), but there are two (big) problems here.

  • Silverlight 5 x64 does not work on Windows Vista x64, you need to install Silverlight 5 x86. Color me surprised about this.
  • SQL Server 2008 R2 Express requires Windows Installer 4.5, which in my tests is already installed on Windows Vista and Windows 7, but not on Windows XP (and probably not on Windows Server 2003). The package to install differs between Windows XP x86 and Windows XP x64.

So, I decided to go with only one installer: I will install Silverlight x86 and SQL Server x86, and I’ll also bundle Windows Installer 4.5 for XP x86 (sorry, XP x64, no love for you).
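The whole prerequisite matrix boils down to a small decision function. As an illustrative sketch only (TypeScript just for notation; the package file names besides dotNetFx40_Full_setup.exe are made up):

```typescript
type WindowsVersion = "XP" | "Vista" | "7";

// Which prerequisite packages the single unified installer must run:
// always the x86 Silverlight and SQL Express builds (so one installer
// covers both architectures), plus Windows Installer 4.5 only on XP,
// where it is not already present.
function prerequisitesFor(os: WindowsVersion): string[] {
  const packages = [
    "dotNetFx40_Full_setup.exe",   // downloaded if the registry check fails
    "Silverlight5_x86.exe",        // x64 Silverlight is skipped on purpose
    "SqlExpress2008R2_x86.exe",
  ];
  if (os === "XP") {
    packages.push("WindowsInstaller45_x86.exe"); // Vista/7 already ship it
  }
  return packages;
}
```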

SQL Server 2008 R2 Express and Silent installs

SQL Server 2008 R2 Express requires a few careful decisions. In this case, it needs to use the default instance name (SQLExpress) and Mixed Mode authentication. It’s easy to choose that in the setup screens, but I was aiming for something simple, as my end users are not technical people, so this was a problem. It was also a problem that the prerequisites would pop up windows left and right asking the user for input.

So I decided to install all of them silently. I used the following switches:

  • For Windows Installer 4.5: /q
  • For .NET Framework 4.0: /passive
  • For Silverlight 5: /q
  • For SQL Server: this one is nastier, as I want it to also make the decisions about the instance name, the authentication,… The command is:

/QS /ACTION="Install" /IACCEPTSQLSERVERLICENSETERMS=true /FEATURES=SQLENGINE /INSTANCENAME=SQLEXPRESS /SECURITYMODE=SQL /SAPWD=password /ADDCURRENTUSERASSQLADMIN=true /SQLSVCACCOUNT="NT AUTHORITY\NETWORK SERVICE"

Copying Files

This part is the simplest one: the installer only needs to copy some of the files generated by LightSwitch’s Publish action onto the target machine. Just copy setup.exe, YourApplication.application, and the whole “Application Files” folder.

You can also create a desktop shortcut to the .application file (LightSwitch only creates one in the Windows Start menu).

Final Configuration

Finally, the application needs to do a few things.

  • First, create the database. When you publish in LightSwitch, it generates a SQL script for you to create the DB; just run it with something like this in your installer:

"[ProgramFilesFolder]\Microsoft SQL Server\100\Tools\Binn\SQLCMD.EXE" -i "[#database.sql]" -S .\SQLExpress

This script gives some warnings at the end, but as far as I know, they don’t matter.

  • Create the database administrator. I use Forms authentication for my app, so I need to create a SQL administrator for the database I created so the app can connect to it. Luckily, LightSwitch also generates this .sql script for you. Unluckily, the script is wrong. Your script will look something like this:
:setvar DatabaseName "MyDatabase"


:setvar DatabaseUserName "administrator"


:setvar DatabaseUserPassword "password"


GO


 


USE [$(DatabaseName)]


 


DECLARE @usercount int


SELECT @usercount=COUNT(name) FROM sys.database_principals WHERE name = '$(DatabaseUserName)'


IF @usercount = 0


    CREATE USER $(DatabaseUserName) FOR LOGIN $(DatabaseUserName)


GO


 


EXEC sp_addrolemember db_datareader, $(DatabaseUserName)


EXEC sp_addrolemember db_datawriter, $(DatabaseUserName)


EXEC sp_addrolemember aspnet_Membership_FullAccess, $(DatabaseUserName)


EXEC sp_addrolemember aspnet_Roles_FullAccess, $(DatabaseUserName)


EXEC sp_addrolemember aspnet_Profile_FullAccess, $(DatabaseUserName)


GO


 



But that will not work, as there is no SQL login associated with “administrator”. The correct script is:

:setvar DatabaseName "MyDatabase"
:setvar DatabaseUserName "administrator"
:setvar DatabaseUserPassword "password"
GO

CREATE LOGIN $(DatabaseUserName) WITH PASSWORD = '$(DatabaseUserPassword)'
,DEFAULT_DATABASE = [$(DatabaseName)]
GO

USE [$(DatabaseName)]

DECLARE @usercount int
SELECT @usercount=COUNT(name) FROM sys.database_principals WHERE name = '$(DatabaseUserName)'
IF @usercount = 0
    CREATE USER $(DatabaseUserName) FOR LOGIN $(DatabaseUserName)
GO

EXEC sp_addrolemember db_datareader, $(DatabaseUserName)
EXEC sp_addrolemember db_datawriter, $(DatabaseUserName)
EXEC sp_addrolemember aspnet_Membership_FullAccess, $(DatabaseUserName)
EXEC sp_addrolemember aspnet_Roles_FullAccess, $(DatabaseUserName)
EXEC sp_addrolemember aspnet_Profile_FullAccess, $(DatabaseUserName)
GO


You can run this script from the installer with the following line:

"[ProgramFilesFolder]\Microsoft SQL Server\100\Tools\Binn\SQLCMD.EXE" -i "[#CreateUser.sql]" -S .\SQLExpress


  • Create the application administrator: you have to create an initial user that can connect to the application and has security administration rights, so the initial configuration can be done. The command line for this is:

Microsoft.LightSwitch.SecurityAdmin.exe /createadmin /user:Administrator /password:lalala_1 /fullusername:Admin /config:"../web.config"

So the installer also needs to copy the Microsoft.LightSwitch.SecurityAdmin.exe file, which needs a ton of DLLs to run. Luckily, they are all inside the “Application Files/Bin” folder, which the installer copied earlier, so I just copy this file there too and run the command from that location (that is also why the path to the web.config is ../web.config: it sits in the “Application Files” folder).



  • Run setup.exe. This is the ClickOnce installer generated by LightSwitch when you click publish, run it and the app will finish the installation and run.

After all of that, I got an installer that puts my application onto the users’ target machines with minimal fuss for them. And testing shows it works quite well.



Well, except for one strange issue: if the application is installed in a path containing the ‘&’ character, it fails to launch. The issue was reported in 2008 (http://blogs.msdn.com/b/gauravb/archive/2008/12/02/clickonce-application-does-not-install-when-the-deployment-path-includes-ampersands.aspx) and I suppose it was deemed not important enough to fix, which is more or less fair, unless you are developing an application for ‘Dungeons & Dragons’ like I am…


Thursday, November 17, 2011  |  From Jad Engine Blog

Recently in one of my pet projects I needed to solve the following problem: build a lambda using an expression tree from a string that represents an access to a chain of properties. For example from this string:

“MyClass.Property1.Property2.Property3”

To:

f => f.Property1.Property2.Property3

This problem had two other conditions:

  • The last property always returns a double.
  • I have a list of strings to transform, and I want to save them in a list of lambdas, but each string has a different starting class, and those classes don’t have any common root (other than object).

So, to fulfill those conditions, my idea was to save the generated lambdas in a List<Func<dynamic, double>>. With all of this, I set out to build a method that generates the expression tree and compiles it to a lambda. I started with something like this (no error checking in the examples):

public Func<dynamic, double> Parse(string expression)
{
    string[] split = expression.Split(new string[1] { "." }, StringSplitOptions.RemoveEmptyEntries);

    string target = split[0];
    IEnumerable<string> properties = split.Skip(1);

    ParameterExpression param = Expression.Parameter(Type.GetType("ConsoleApplication1." + target));

    Expression exp = param;
    foreach (var prop in properties)
    {
        exp = Expression.Property(exp, prop);
    }

    var lambda = Expression.Lambda<Func<dynamic, double>>(exp, param);
    return lambda.Compile();
}



Pretty easy stuff: I just chain calls to Expression.Property to generate the property accesses. But this fails at runtime on the Expression.Lambda line with:



ArgumentException: ParameterExpression of type 'ConsoleApplication1.DomainProvince' cannot be used for delegate parameter of type 'System.Object'



I tried to solve it in different ways, but there is no way to make this approach work. So I searched a little on the internet and found this question on Stack Overflow:



http://stackoverflow.com/questions/2046480/net-4-0-how-to-create-an-expressionfuncdynamic-dynamic-or-is-it-a-bug



The important part of that question is that Eric Lippert says the following:



The bit that is not legal is the execution of a dynamic operation inside a lambda that is being converted to an expression tree type.



Uh? That’s not legal in .NET? I wrote a quick example:




Func<dynamic, double> f = d => d.Property1.Property2.Property3;
Expression<Func<dynamic, double>> e = d => d.Property1.Property2.Property3;



The first line compiles, but the second one gives the following compile-time error:



An expression tree may not contain a dynamic operation



Damn, life sucks :S But I supposed I was not the first person in the world to try something like this, so I searched a little more and found another post on Stack Overflow:



http://stackoverflow.com/questions/3562088/c-sharp-4-dynamic-in-expression-trees



There they create an expression tree with a dynamic operation in the body using Expression.Dynamic, and it works! So I reworked my original code into this:




public Func<dynamic, double> Parse2(string expression)
{
    string[] split = expression.Split(new string[1] { "." }, StringSplitOptions.RemoveEmptyEntries);
    IEnumerable<string> properties = split.Skip(1);

    CallSiteBinder binder;
    ParameterExpression param = Expression.Parameter(typeof(object));

    Expression exp = param;
    foreach (var prop in properties)
    {
        // One CSharpArgumentInfo: the only "argument" of a member get is its target.
        binder = Binder.GetMember(
            CSharpBinderFlags.None,
            prop,
            typeof(object),
            new CSharpArgumentInfo[] {
                CSharpArgumentInfo.Create(CSharpArgumentInfoFlags.None, null) });

        exp = Expression.Dynamic(binder, typeof(object), exp);
    }

    Expression body = Expression.Convert(exp, typeof(double));

    var lambda = Expression.Lambda<Func<object, double>>(body, param);
    return lambda.Compile();
}




Even if the code is a little uglier, the idea is the same: loop over the property names chaining calls, but this time building a dynamic operation with Binder.GetMember and Expression.Dynamic. The only thing to remember at the end is to cast the result to double, since everything dynamic is treated as object internally; that is easy to see given that I build an Expression<Func<object, double>> but return a Func<dynamic, double>.
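As a usage sketch (the Province/Stats classes and the property chain are invented for the example), the same Expression.Dynamic technique can be exercised end to end like this:

```csharp
using System;
using System.Linq;
using System.Linq.Expressions;
using System.Runtime.CompilerServices;
using Microsoft.CSharp.RuntimeBinder;

// Hypothetical domain classes for the demo; the technique works on any
// chain of properties whose last link returns a double.
public class Province { public Stats Stats { get; set; } }
public class Stats { public double Population { get; set; } }

public static class ParseDemo
{
    // Same approach as Parse2: one Expression.Dynamic per property name.
    public static Func<dynamic, double> Parse(string expression)
    {
        var properties = expression.Split('.').Skip(1);

        ParameterExpression param = Expression.Parameter(typeof(object));
        Expression exp = param;

        foreach (var prop in properties)
        {
            CallSiteBinder binder = Binder.GetMember(
                CSharpBinderFlags.None,
                prop,
                typeof(ParseDemo),
                new[] { CSharpArgumentInfo.Create(CSharpArgumentInfoFlags.None, null) });

            exp = Expression.Dynamic(binder, typeof(object), exp);
        }

        // Everything dynamic is object internally, so cast the last link to double.
        var body = Expression.Convert(exp, typeof(double));
        return Expression.Lambda<Func<object, double>>(body, param).Compile();
    }

    public static void Main()
    {
        var f = Parse("Province.Stats.Population");
        var p = new Province { Stats = new Stats { Population = 42.0 } };
        Console.WriteLine(f(p)); // 42
    }
}
```

Note that the first segment of the string ("Province") is only documentation here: the parameter is typed object, so the compiled lambda works on any instance exposing that property chain.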

Wednesday, June 22, 2011  |  From Jad Engine Blog

First, Kartones reminded me that if anyone wants to try to solve the problems of the Tuenti Programming Contest, they can be accessed at the following address:

https://contest.tuenti.net/?m=Questions

And second, yesterday while talking with the great Python hacker Javier Santana, I remembered that .NET 4.0 added a new namespace and struct: System.Numerics.BigInteger. This struct represents an arbitrarily big integer, so using it we have no overflow problems nor precision ones (the only limit would be running out of memory to represent the number, and it’s hard to fight against that).

Using BigInteger, the first solution would be nearly perfect:

public static void Main(string[] args)
{
    char[] chars = new char[1] { ' ' };

    while (Console.In.Peek() != -1)
    {
        Console.Out.WriteLine(
            Console.In.ReadLine()
                .Split(chars, StringSplitOptions.RemoveEmptyEntries)
                .Select(s => BigInteger.Parse(s))
                .Sum());
    }
}



The only problem is that Sum is not defined as an extension method over IEnumerable<BigInteger>, but that is an easy problem to fix.




public static class EnumExtensions
{
    public static BigInteger Sum(this IEnumerable<BigInteger> source)
    {
        if (source == null)
        {
            throw new ArgumentNullException("source");
        }

        BigInteger total = 0;
        foreach (BigInteger num in source)
        {
            total += num;
        }

        return total;
    }
}



Ta-da!



Tuesday, June 21, 2011  |  From Jad Engine Blog

Last week Tuenti (a Spanish social network) launched a programming contest, something similar to Google’s Code Jam. I only tried the first two problems during the week (I could blame lots of things, but it all comes down to Baldur’s Gate :p), but I’ll try to write some posts on how I would solve them.

So let’s start with the first problem, adding numbers. This is the problem definition:

“Your amazing program should calculate the sum of the numbers given in each line, and output one line for each question with the response. Numbers can be negative, really big and lines contain extra spaces, so make your program resistant to the input.

Your program will need to read from standard input, line by line till the end of the input. Consider each line a different question. For each line you read, output the sum of all the given numbers.”

That’s a one liner in C#, something like this:

public static void Main(string[] args)
{
    while (Console.In.Peek() != -1)
    {
        Console.Out.WriteLine(
            Console.In.ReadLine()
                .Split(new char[] { ' ' }, StringSplitOptions.RemoveEmptyEntries)
                .Select(s => decimal.Parse(s))
                .Sum());
    }
}



Just split the line, parse the numbers, and sum them. Problem done.



But this is not what I did in the contest; I used double.Parse instead, and while the code seems similar, there’s a test case where it breaks:



9999999999999999999 1 -9999999999999999999



Which returns 0 instead of 1. Looking at it a little, it’s not an overflow problem (a double’s range is much bigger), but a precision problem (doubles have only 15-16 digits of precision, while decimal goes up to 28-29).
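The precision difference is easy to reproduce directly with the contest numbers:

```csharp
using System;

public static class PrecisionDemo
{
    // doubles keep ~15-16 significant digits, so the 19-digit values are
    // rounded and the +1 in the middle is swallowed.
    public static double SumAsDouble() =>
        9999999999999999999d + 1d - 9999999999999999999d;

    // decimal keeps 28-29 significant digits, enough for this input,
    // so the addition is exact.
    public static decimal SumAsDecimal() =>
        9999999999999999999m + 1m - 9999999999999999999m;

    public static void Main()
    {
        Console.WriteLine(SumAsDouble());  // 0
        Console.WriteLine(SumAsDecimal()); // 1
    }
}
```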



Precision problems have two great things about them:



  • They don’t throw, so you have no clue if they happened or not.
  • You can’t really avoid them.

So, if the contest had used a 30-digit test case instead of 19, decimal would have failed too. In the contest I decided to try to avoid this case by adding the numbers in a more intelligent (up for debate) way: adding them in pairs of biggest and smallest number.



This is the code I entered:




/// <summary>
/// Entry point.
/// </summary>
/// <param name="args">Program arguments.</param>
public static void Main(string[] args)
{
    char[] chars = new char[1] { ' ' };
    string line;

    while (Console.In.Peek() != -1)
    {
        line = Console.In.ReadLine();
        var split = line.Split(chars, StringSplitOptions.RemoveEmptyEntries);

        try
        {
            Console.Out.WriteLine(PartialSum(split.Select(s => double.Parse(s))));
        }
        catch (Exception)
        {
            Console.Out.WriteLine("Error");
        }
    }
}

/// <summary>
/// Attempts to add an enumeration of numbers performing partial additions of the
/// biggest and smallest elements of the collection.
/// </summary>
/// <param name="numbers">The numbers.</param>
/// <returns>The result of the addition.</returns>
public static double PartialSum(IEnumerable<double> numbers)
{
    List<double> list = numbers.OrderBy(n => n).ToList();

    while (list.Count > 1)
    {
        double newValue = list[0] + list[list.Count - 1];
        list.RemoveAt(0);
        list.RemoveAt(list.Count - 1);
        list.Add(newValue);

        list = list.OrderBy(n => n).ToList();
    }

    return list[0];
}



Nothing too surprising here: order the list of numbers, sum the first and the last, order again,… until you have one number left. This solves the precision problem of the test case, although it would fail if instead of –1e19 there were a loooooot of –1s one after the other :p



One could think of performing the additions over the full sorted list in one pass (biggest with smallest, second biggest with second smallest), instead of re-sorting after every partial sum. The nice thing about sorting after every sum is that it allows summing a collection of numbers that would overflow in the middle of the operations but not in the final result. For example, imagine a data type that goes from –10 to 10, and the following sum:



6 6 6 -10



The result should be 8, which is a valid number for that data type, and sorting after every sum lets us compute it correctly. The bad thing is that sorting after every sum is pretty slow, but in programming you always have to accept trade-offs…
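The overflow-avoiding behavior can be sketched with sbyte (range -128..127) standing in for the hypothetical -10..10 type; the pairing logic is the same as PartialSum above, just in checked arithmetic so an intermediate overflow throws:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public static class PairwiseSumDemo
{
    // Pair the smallest and largest remaining values, re-sorting each round.
    public static sbyte PairwiseSum(IEnumerable<sbyte> numbers)
    {
        var list = numbers.OrderBy(n => n).ToList();
        while (list.Count > 1)
        {
            sbyte newValue = checked((sbyte)(list[0] + list[list.Count - 1]));
            list.RemoveAt(0);
            list.RemoveAt(list.Count - 1);
            list.Add(newValue);
            list = list.OrderBy(n => n).ToList();
        }
        return list[0];
    }

    // Plain left-to-right accumulation, also checked.
    public static sbyte NaiveSum(IEnumerable<sbyte> numbers)
    {
        sbyte total = 0;
        foreach (var n in numbers) total = checked((sbyte)(total + n));
        return total;
    }

    public static void Main()
    {
        // Final result is -60, well inside sbyte, but 100 + 100 already overflows.
        sbyte[] values = { 100, 100, 100, -120, -120, -120 };

        try { Console.WriteLine(NaiveSum(values)); }
        catch (OverflowException) { Console.WriteLine("left-to-right sum overflows"); }

        Console.WriteLine(PairwiseSum(values)); // -60, no intermediate overflow
    }
}
```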



So, this exercise was really simple, but what is its corollary? Even simple things like adding numbers can be impossible to solve perfectly if the input is nasty enough. Spend a reasonable amount of time defending against input problems, but accept that it will always be possible to break things, so don’t go crazy.



Friday, May 06, 2011  |  From Jad Engine Blog

Yesterday I had the opportunity of returning once again to my old university (UAM, Universidad Autónoma de Madrid) to give a talk about how to make a living developing videogames. It wasn’t a technical talk with code and so on, but more a talk about all the things that people should take into account if they want to enter the industry, either in a big game company, or creating their own studio.

I have to admit I was a little nervous at the start, as I hadn’t given a talk in nearly a year. In previous years I had been giving ten to fifteen talks per year, but last year I was so busy that I just couldn’t do any. But I think in the end things turned out pretty well: the material was light, with some very funny facts here and there, so even if it ran a little longer than expected, it seemed like people enjoyed it. As usual, I have to correct my habit of repeating certain words, like “vale”; when I’m speaking and a little nervous I end every sentence with it :p

So, if you want the slides, they can be downloaded here:

http://kartones.net/files/folders/directxxna/entry51693.aspx

(they are in Spanish)



Saturday, April 23, 2011  |  From Jad Engine Blog

Last week MIX 11 took place in Las Vegas and there was a lot of information for the next release of WP7 codenamed “Mango”. I’m going to talk a little about the new features added to the Phone that could be interesting for XNA developers in a random order (I’ll leave things I like the most for the last part of the post).

First, new or improved APIs. Microsoft has improved Live Tiles, added native sockets, new sensor data,… In general, while a lot of devs will probably be tempted to ignore most of the new APIs, they mean two things for apps: scenarios that weren’t possible before are now enabled, and apps will look better in general as they integrate more with the phone.


The next big thing is the integration of XNA and Silverlight. We will be able to use one inside the other, which means no more UI coding :) UIs should also gain a ton of quality, as Silverlight is a very powerful tool for that and enables things that just weren’t possible before (for example, support for Asian characters; fonts are also rendered much better in SL than in XNA). This also fits with the announcement that Silverlight 5 for the desktop will have a new 3D API that is a subset of XNA.

Another game changer is going to be the new garbage collector. The thing is that right now the GC on WP7 (and Xbox 360) is very simple (mark and sweep), which means people should be conscious and careful about garbage and allocations. That is not bad in itself, but a lot of the new features of C# are implemented behind the scenes as a compiler transformation that generates a class (garbage). This happens with yield, lambdas (so doubly for LINQ), and the future async.

Which in general means we are living in C# 4, but we write code as if we were in C# 1. The new generational collector should help improve this situation a lot. It doesn’t mean we can just forget about the GC, but things are going to be simpler for us on that front. So far there is no deep information about the new GC (when it runs, and so on), but I hope it comes soon.
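To make the compiler transformation concrete, here is a small sketch (the class and method names are mine): a lambda that captures a local is compiled into an instance of a hidden "display class", so each call of the outer method allocates, while a capture-free lambda can be cached by the compiler and reused.

```csharp
using System;

public static class ClosureDemo
{
    // The lambda captures 'factor', so the compiler generates a hidden
    // display class to hold it; every call allocates a fresh instance
    // (plus a new delegate pointing at it).
    public static Func<int, int> MakeMultiplier(int factor)
    {
        return x => x * factor;
    }

    // This lambda captures nothing, so the compiler can cache a single
    // delegate instance and hand the same one back on every call.
    public static Func<int, int> MakeDouble()
    {
        return x => x * 2;
    }

    public static void Main()
    {
        Console.WriteLine(MakeMultiplier(3)(5)); // 15
        Console.WriteLine(MakeDouble()(5));      // 10
    }
}
```

On a simple mark-and-sweep collector, the first form sprinkled through a per-frame game loop is exactly the kind of invisible garbage the post is warning about.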

And lastly, Microsoft launched a new CTP of Visual Studio Async with added support for WP7. Async is a new feature of C# vNext designed to make it easier to write asynchronous code. With Async we also gain access to Task and Task<T> (those classes are the core of Async), which existed in .NET 4 for the Task Parallel Library but were absent from the WP7 codebase.

The history of asynchrony in the .NET Framework is a long one. Right now the framework supports several models for it:

  • The Asynchronous Programming Model (APM): based on BeginXXXXX and EndXXXXX calls. Like for example in the Stream class with BeginRead/EndRead and BeginWrite/EndWrite.
  • The Event-based Asynchronous Pattern (EAP): based on a method that signals completion using an event. For example, this is used in the WebClient class with the DownloadDataAsync method and the DownloadDataCompleted event.
  • The Task-based Asynchronous Pattern (TAP): based on the new Task and Task<T> classes, which can represent CPU or I/O bound operations, and blocking or not blocking when waiting for their completion.
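As a side note, the framework can bridge the older APM model into the TAP: Task.Factory.FromAsync wraps a Begin/End pair into a single Task. A minimal sketch, using a MemoryStream purely for illustration:

```csharp
using System;
using System.IO;
using System.Threading.Tasks;

public static class ApmToTapDemo
{
    // Wraps the APM pair Stream.BeginRead/EndRead into a TAP Task<int>
    // that completes with the number of bytes read.
    public static Task<int> ReadAsync(Stream stream, byte[] buffer) =>
        Task<int>.Factory.FromAsync(
            stream.BeginRead, stream.EndRead, buffer, 0, buffer.Length, null);

    public static void Main()
    {
        byte[] data = { 1, 2, 3, 4 };
        using (var stream = new MemoryStream(data))
        {
            var buffer = new byte[4];
            Console.WriteLine(ReadAsync(stream, buffer).Result); // 4
        }
    }
}
```

Once the operation is a Task, it composes with ContinueWith today and with await in the Async CTP.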

One example of the TAP would be:

void MethodThatCallsAsyncOperations()
{
    // Some operations here
 
    Task task = Task.Factory.StartNew(() =>
    {
        this.LongRunningMethod();
    }).ContinueWith(t =>
    {
        this.MethodContinuation();
    });
 
    // Code that will execute as soon as StartNew is called
}

The code in the second comment will be hit as soon as StartNew is called, and then, sometime in the future, when LongRunningMethod ends, the code will continue with the call to MethodContinuation. Nice :)

While this example looks “easy”, the TAP can become pretty complicated when there is longer code, exceptions,… So Async aims to simplify this and transform it into:

void Method()
{
    // Some operations here
 
    this.MethodThatHasAsyncOperations();
 
    // Code that will execute as soon as MethodThatHasAsyncOperations hits the await keyword
}
 
async void MethodThatHasAsyncOperations()
{
    await this.LongRunningMethod();
    this.MethodContinuation();
}

As you can see, the async version looks pretty simple (in fact it looks nearly the same as non-asynchronous C# code), as the way of letting the user know that there is an asynchronous operation is just putting await and marking the method as async. The code that comes after the await keyword will be the continuation to execute when LongRunningMethod ends.

The TAP document I linked before has more interesting examples, and shows the power of this feature and why it would be good for any .NET developer to start getting used to it.

async void DownloadAndAddImages()
{
    List<Task<Bitmap>> imageTasks =
        (from imageUrl in urls select GetBitmapAsync(imageUrl)).ToList();
 
    while (imageTasks.Count > 0)
    {
        try
        {
            Task<Bitmap> imageTask = await Task.WhenAny(imageTasks);
            imageTasks.Remove(imageTask);
 
            Bitmap image = await imageTask;
            panel.AddImage(image);
        }
        catch { }
    }
}

This code starts downloading a set of images asynchronously and then, as soon as any image finishes downloading, adds it to a panel. Notice how nice pushing changes onto the UI thread is: no extra code is needed for that, as it’s handled automatically for you.

And one last thing about asynchronous operations: asynchronous != parallel. Parallelism requires asynchrony, but the reverse is not true. If you have, for example, an operation that will have to wait because of I/O (say, a call to a web service), then you don’t want to waste a thread waiting for it. But if the operation is a very intensive mathematical calculation, off-loading it to another thread is the right way to go.

And that is more or less everything. There are other new features in Mango like IE9 or fast app switching, but for devs I think the ones outlined here are the real game changers (or at least the most interesting for me).



Tuesday, April 12, 2011  |  From Jad Engine Blog


Disclaimer: I got a review code for this game, and I have exchanged quite a few tweets with Luke and Ryan.

Here is the game trailer: http://www.youtube.com/watch?v=z9DMlA6r7yQ

DataStream is a game with retro-looking graphics, music and gameplay (pretty clear if you watch the trailer :) ). The game revolves around lines (streams) where blocks of different colors appear, moving at different speeds. You have to move one block that represents you around those lines, avoiding crashing into blocks of some colors and collecting blocks of others. Surprisingly for such a simple idea, DataStream manages to offer different variations, each one quite interesting in its own right.

The first two game modes are called Arcade (Easy or Casual, and Original). In this mode you have to collect red blocks while avoiding crashing into green ones. But there are two additions: if a set number of red blocks appears in one stream, that stream is closed (if you crash, the stream is closed too). And if you enter a blue stream, additional streams appear. Your objective is to score as many points as possible before all the streams get closed (because of red blocks or crashes). And be warned, in Original that happens pretty fast if you aren’t careful.

The next mode is called Corruption. You again have to collect red blocks while avoiding purple ones. But the objective here is to close all the streams (crashing or 10 red blocks opens a new stream, while entering a yellow stream closes one). The gameplay is organized in waves, moving up to the next wave when you close all the streams.

Then comes Waypoint. In this mode you have to score as much as possible in a set amount of time. Collecting yellow blocks gives you extra time, while white blocks are the ones you crash into.

As far as I know, the mode called Glitcher is an homage to the game “Frogger” (I haven’t played it; I wasn’t playing games that many years ago :p). Your objective is to move your block to the top of the screen to collect some blocks in a set amount of time. Every time you clear a level you get 60 seconds, and you have 3 hearts that you can lose when time runs out (1/4 of a heart gives you ten seconds) or when you crash (–1 heart).

And the last mode is called Flow Rider, again an homage to Frogger, I think. You have to reach the top of the screen with your block and then go back to the start. The catch is that once you get to the top, the screen fades to black and you don’t see a thing, so you have to memorize or time your trip back carefully. You have 3 hearts (crashing = –1 heart), and when the screen fades some red blocks appear that you can see, which give you extra hearts (and let you glimpse where the obstacles are).

In general, the game is solid and has a clear gameplay idea. It’s sometimes frustratingly hard, especially if you try to play with the analog stick. In my first tries with the analog stick I was dying like every 5 seconds. Then I read another review and discovered I could also play with the D-pad, which makes the game much more enjoyable instead of an exercise in retro-despair. The graphics are totally old school, just plain colored rectangles, and in general they are serviceable, although in some parts I found that the color combinations made the game text hard to read (there’s some story for each game mode); then again, it could be me, as I was playing on a projector, not a TV. And lastly, the music is very good; some of the tunes I could listen to again and again. I’m gladly surprised on this front.

So, not a game I’m going to be playing for hours, but it’s a nice time filler for short moments I have around the day that I’m waiting for something else to happen.



Saturday, April 09, 2011  |  From Jad Engine Blog

(This post is about a topic that has appeared recently in the Spanish community, and I didn’t want to let it go without some comments.)

I’m writing this article in reference to a post recently published by Luis Guerrero. Luis explains and argues why you should not write unit tests, but in reality his conclusions can be extrapolated to testing and developing software in general. The goal of this post is to get us developers and companies to stop wasting time and money testing and doing things that a certain part of the community has taken to calling “best practices”.

In my day-to-day work I develop corporate software. The kind that is used in companies, for serious things, the kind that moves the world (or so they say). But I also like to tinker with another type of software, the fun kind: videogames. Maybe they are not as important, but there is no doubt they are far ahead of us in software engineering matters.

I’m going to take as an example Final Fantasy VII for the PlayStation, which surely rings a bell for almost all of you. By the way, if what I say about its development is not true, just assume I’m talking about some earlier Final Fantasy; there are six of them, so it’s bound to hold for one.

Taking a quick look at Wikipedia, Final Fantasy VII launched in 1997 after about 24 months of development, with 120 people involved and a budget of 45 million dollars. I have never worked on a project like that in my life, but I know of some of similar scale by hearsay. For example, rollouts in public administration. And those almost always end in disaster. But does FF VII crash? No. Does it need patches every week? No. Are the users unhappy? No.

I’m fairly sure that in ’97 the people at Square Enix used neither CMMI, nor unit tests, nor anything of the sort: they used beta testers, period. Someone will say: “but in public administration there are also people who test things!” Sure, but they are not motivated the right way. Testers in the videogame world are given the promise that one day they will be designers, and honestly, it’s cooler to think that in the future you will make World of Fantasy Theft Auto of War than Word 3248 with a double Ribbon. Twelve-hour days for peanuts don’t matter if the motivation is right. Thankfully, some big companies in the industry still apply these ideas.

Some smart guy may think: “but it’s easier to motivate someone to test videogames!” And he may be right, but that’s no excuse not to try. For example, imagine we are developing a messaging client. Do you know what a pain it is to write unit tests for that, particularly for the video part? Instead, with a little lateral thinking, the kind so fashionable in interviews these days, you take your users and tell them: “you are going to use this program to chat with Pamela Anderson.”

It won’t take them three seconds to find the webcam button, and if it does, the interface is badly designed. Two tests for the price of one: besides checking the webcam you get realistic feedback on usability, so we can do without the UX person and their discussions about whether the logo is red, burgundy, or maroon. Let the programmers build the interfaces, since good taste is something you are born with anyway, and together with whatever the users ask for (who said fear of change?) everything is settled.

In any case, let’s not kid ourselves: having someone test the software is a win. It is the way to go. But notice what divergent paths both types of development have taken.

In commercial software, we saw that the tester thing worked very well, and we started using testers to run the tests instead of the users themselves. But of course, now we earn less money, and it bruises our egos that someone who just pushes buttons tells us we screwed up,… So in some companies we go further and invent TDD so that a machine runs the tests, and if a test fails and it isn’t something very critical, we ignore it and everyone moves on. Besides, I personally am very introverted, like any good programmer, so not having to talk to other people suits me great.

Meanwhile, in the videogame world, what have they done?

Charge the users for being beta testers.

So simple, and so brilliant at the same time. Minecraft, for example, a little game in Java, has more than a million beta testers. They have pocketed 33 million dollars, which is no small feat. Microsoft wishes people would pay to download Windows, let alone a beta of it.

Besides, the videogame model has many other advantages. For example, you save the effort of deploying virtual machines to test different combinations of operating systems, patches,… With a million people you get every combination of hardware, software, drivers, and viruses you can think of.

And what does the future hold? In the commercial software world they talk about aspects, contracts,... In short, more time that we programmers will waste learning things and writing code that does nothing visible for the user.

And meanwhile, in videogames, there are starting to be games where, instead of paying to join their beta, you pay to join the beta of a future sequel. And here we are complaining that salespeople sell vapor; how much we still have to learn.



Monday, March 21, 2011  |  From Jad Engine Blog

Tortoise_cov

Disclaimer: I got a reviewer copy of this book in PDF format.

I have been using source control management systems nearly since I started my professional development career. First with Visual SourceSafe, then a brief encounter with SVN, then a lot of Team Foundation Server, then SVN again, and now Kiln (a Mercurial-based product). I’m pretty used to them nowadays and I think that they are essential for any developer and any project.

But even for something I think is so important, my “education” in this type of software has been pretty informal. I have learned most of what I know by trial and error, except for Mercurial, which I read quite a lot about as I had heard it was a big change from TFS/SVN.

So when I was offered the chance to review a book about SVN, I was pretty interested in the subject. First, it would probably help me (I use Kiln for work, but my own personal projects are hosted on an external SVN server). Secondly, I have found that a lot of beginning devs ignore this subject, maybe because it sounds complicated, maybe because they think they don’t need it, maybe because reading about it is usually pretty boring, no clue.

This book explains how to work with Subversion, one of the most common (and affordable, a great point if you are starting a company) source control management systems out there, using the TortoiseSVN Windows shell extension. The book is divided into 10 chapters that walk you from the basics of installing TortoiseSVN or your own Subversion server, to integrating with issue tracking systems, and everything in between.

The book is written in a way that is very easy to read, with lots of pictures (and by lots, I really mean lots, there are screenshots everywhere) and quite a few examples of very common real-life scenarios. Apart from teaching how to use TortoiseSVN, it also gives guidance on general concepts of using source control to maximum advantage (like why and how to branch, a topic a lot of devs never touch in detail but that is nice to know). One nice point is that it was reviewed by the lead developer of TortoiseSVN, Stefan Kung (among others), which means you can be quite confident in the contents of the book.

I think this is an interesting book for beginners in SVN and SCM, as it walks them through the topic very easily (sometimes it reads more like a tutorial than a book). For experts there is not much here; they could probably learn a thing or two, but I doubt they would find it very interesting overall. Nevertheless, the book states very clearly on the front page which audience it is aimed at.

The only problem I have with the book is that I find it pricey for the content it offers. At around $40 I think it is expensive for an “extended tutorial”, and at 260 pages it feels short compared to other technical books in the same price range (even more so given that this book makes heavy use of images, so the word count feels much lower).



Tuesday, March 15, 2011  |  From Jad Engine Blog

Three years, three MVP Summits. And for me this was the best one so far. I arrived in Seattle on Sunday pretty tired, but just in time to go to the traditional Spanish MVP dinner organized by our great MVP lead Cristina.

[photo]

It was great to meet all the other MVPs and some people who aren’t MVPs anymore but live in the area or work at Microsoft Corp. Usually during the Summit we have a ton of talks and events, so we can’t see each other much. It’s also funny that there are other Spanish MVPs I mostly see in Seattle; talk about crazy…

Then, on Monday, the Summit started. For newcomers: the Summit has two main parts, the keynotes from the big bosses (Ballmer and the others) and the deep-dive sessions with our teams (where we usually see shiny new stuff).

The keynotes are always a little hit and miss. Ballmer is a great speaker, so his talk is always pretty fun to watch; the others depend: sometimes the topic is interesting, sometimes you don’t care much and you would rather spend the time doing a little networking.

The real meat is in the deep-dive sessions. This time we spent nearly four days reviewing the current state of the platform and getting information about what is coming next for XNA, WP7,… I can’t tell you anything as everything is under NDA, but believe me, things have looked pretty good so far (probably much better than any of us was expecting; the platform numbers were great), and they will look much better in the future. In fact, of the three times I have been in Redmond, this is the time I have come away most excited about what the future holds for XNA.

But of course everything is not perfect, and as usual we passed that feedback to the team. They are pretty aware of where they are doing well and where they are doing badly, but contrary to what some people may believe, they don’t have infinite resources at their disposal to fix and do everything we may want or need. They have a pretty big list of things to do, and they are tackling it as fast as they are able to.

Apart from that, another great part of the Summit is spending time with my peers. As usual I enjoyed the company of all the other XNA/DX MVPs, who are all very interesting in their own ways.

[photo]

For example, this farewell dinner with (left to right) Yuna, Richard, Michael, and myself (Yuna and I are wearing the awesome XNA jackets our team made for us, thanks!). I’m also pretty happy that at least one other Spanish XNA MVP came to the Summit, the great Inaki from the Simax driving simulator. The sad part is that I missed meeting some very cool people like Benjamin, Charles and Petri, who couldn’t come this year.

Another great part of the Summit is the parties. We have a small, private one with just us and the XNA team (sometimes with MVPs from related categories, like WP7 this year). It’s a mix of drinking and serious talk in a more relaxed setting than the deep-dive sessions. I had a lot of fun with a US WP7 MVP whose name I can’t remember at all (he spoke Spanish!), Phil Bourke from Ireland, and Shawn, Nick, John, and Charles from the XNA team, who stayed with us and shared their points of view on a lot of interesting subjects. Thanks a lot, people; I know you were super tired and had to work the next day, but I am very grateful you stayed with us.

[photo]

And then there’s a huge final party with all the people at the event (1,500 MVPs plus guests). This year it was at Safeco Field, and it was pretty good (although I liked the Garage from last year more). I spent most of the time with the Korean MVPs and then fooling around outside with the rest of the Spanish people. There were also some famous baseball players from the Mariners, but well, baseball is not really my thing… Although Dong (XNA/DX MVP, Korea) was really into it:

[photo]

(The last two pictures are pretty crappy, sorry; that’s the difference between a Nikon D90 and the Samsung Omnia camera with three extra drinks and no anti-shake…)

So, to summarize, a very interesting and fun Summit. I really hope I can go next year; it’s one of the best parts of being an MVP.

And lastly, if you want other XNA/DX MVPs’ points of view on the Summit, you can check out the thoughts of Andy, Catalin, Chris, and George (sorry if I have missed any of you).


Microsoft MVP Summit 2011 was posted on 03/16/2011 at Kartones.Net.

Wednesday, February 23, 2011  |  From Jad Engine Blog

While reading books about programming is a nice way to improve or get started on subjects you don’t know, by far the best way to hone your coding skills is by writing code. If you are reading this, there’s a good chance you code at work, but at work there are probably tons of things you can’t use or play with because you aren’t working in that area, or you aren’t using that version of the tools,… So to practice those things you can start your own pet projects, get involved in open source projects,…



But another option, although usually tied to algorithms or very specific fields, is code katas (a cool name for small coding exercises). I’m still feeling lazy and without a clear idea for my own pet projects, but after reading C# in Depth, I just wanted to code a little and try things. So I searched for code katas and found a nice thread on Stack Overflow with a lot of resources on the subject.



After visiting some of the sites, I decided to go with Google Code Jam. I also decided to start easy, so I took their first recommended problem, the Store Credit problem from the Africa 2010 Qualification Round. This is the problem description:




Problem



You receive a credit C at a local store and would like to buy two items. You first walk through the store and create a list L of all available items. From this list you would like to buy two items that add up to the entire value of the credit. The solution you provide will consist of the two integers indicating the positions of the items in your list (smaller number first).



Input



The first line of input gives the number of cases, N. N test cases follow. For each test case there will be:




  • One line containing the value C, the amount of credit you have at the store.

  • One line containing the value I, the number of items in the store.

  • One line containing a space separated list of I integers. Each integer P indicates the price of an item in the store.

  • Each test case will have exactly one solution.

Output



For each test case, output one line containing "Case #x: " followed by the indices of the two items whose price adds up to the store credit. The lower index should be output first.




I sat down to think about it a little. The problem is pretty simple, but finding those two items the “naive” way is O(N²). Depending on I, that could kill me pretty fast (the problems have to be solved within a certain time limit). But for now I decided to ignore that fact: first I would code a solution, and then, if needed, I would optimize the search for the two items (I suppose there is also a time constraint for coding the solution in the real competition, so I had to find a middle ground between the two).



Reading the problem I saw: parsing lines from a file, solving a set of test cases, iterating through items to check their prices,… A lot of collections and iterations there; it seemed a good playground for LINQ. So I went IEnumerable<T> crazy :) First, I needed a class to hold the problem data.





public class ProblemData
{
    public ProblemData(int storeCredit, IEnumerable<int> storeItems)
    {
        this.StoreCredit = storeCredit;
        this.StoreItems = storeItems;
    }

    public int StoreCredit { get; private set; }

    public IEnumerable<int> StoreItems { get; private set; }

    public override string ToString()
    {
        return string.Format("{0} / {1}", this.StoreCredit, this.StoreItems.Count());
    }
}



Just a simple class to hold the credit and the enumeration of items for the test case.



After that, I had to read the file. My idea was to yield ProblemData items from the file as soon as they were read. First, I was going to assume I was getting a IEnumerable<string> (the file lines) from somewhere. Given that input, how could I create a ProblemData class?





private static ProblemData PrivateReadProblemFile(ref IEnumerable<string> lines)
{
    var data = lines.Take(3);
    lines = lines.Skip(3);

    int credit = int.Parse(data.ElementAt(0));
    string[] items = data.ElementAt(2).Split(new char[] { ' ' }, StringSplitOptions.RemoveEmptyEntries);

    return new ProblemData(credit, items.Select(item => int.Parse(item)));
}



Pretty simple LINQ stuff here: the code takes the first three lines into a temporary enumerable, then modifies the enumerable it received by skipping those three lines (that’s why it is passed by ref), and then just parses the item data. I skipped parsing the number of items, as String.Split didn’t need it (and I wasn’t going to check for a malformed file, just as I wasn’t checking for negative credit and other error cases).



So now I needed to feed an IEnumerable<string> with the file lines to that method and yield the results. Checking the .NET documentation, I found a great method called File.ReadLines that did exactly this, and the nice thing is that it didn’t try to read the whole file at once, but went line by line. So the method I wrote was:





public static IEnumerable<ProblemData> ReadProblemFile(string fileName)
{
    IEnumerable<string> lines = File.ReadLines(fileName);

    int numberCases = int.Parse(lines.First());
    lines = lines.Skip(1);

    for (int i = 0; i < numberCases; i++)
    {
        yield return PrivateReadProblemFile(ref lines);
    }
}



The problem is that this didn’t work :( I’m not 100% sure, but I think that having two enumerators (data and lines) over the same file makes it go crazy (it says the TextReader is already closed, which makes sense). The two alternatives were to create a StreamReader and use a loop with the ReadLine method to read and parse the file normally, or to hold the whole file in memory using File.ReadAllLines and then start parsing. The first approach was much cleaner, but it didn’t add much to my quest of using LINQ, so I took the easy way and just used ReadAllLines.
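For what it’s worth, the StreamReader alternative could look something like the sketch below. The class and method names (other than ProblemData) are mine, not from the original project, and ProblemData is repeated only so the snippet is self-contained:

```csharp
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;

// ProblemData repeated here only so the snippet is self-contained;
// it is the same class defined earlier in the post.
public class ProblemData
{
    public ProblemData(int storeCredit, IEnumerable<int> storeItems)
    {
        this.StoreCredit = storeCredit;
        this.StoreItems = storeItems;
    }

    public int StoreCredit { get; private set; }

    public IEnumerable<int> StoreItems { get; private set; }
}

public static class SequentialReader
{
    // Sketch of the StreamReader alternative: a single reader advancing
    // line by line, so no two lazy enumerators fight over one TextReader.
    public static IEnumerable<ProblemData> ReadProblemFileSequential(string fileName)
    {
        using (var reader = new StreamReader(fileName))
        {
            int numberCases = int.Parse(reader.ReadLine());
            for (int i = 0; i < numberCases; i++)
            {
                int credit = int.Parse(reader.ReadLine());
                reader.ReadLine(); // the item count line, not needed thanks to Split
                var items = reader.ReadLine()
                    .Split(new[] { ' ' }, StringSplitOptions.RemoveEmptyEntries)
                    .Select(int.Parse)
                    .ToList();
                yield return new ProblemData(credit, items);
            }
        }
    }
}
```

The nice property of this version is that it streams the file in a single pass with constant memory, at the cost of not exercising much LINQ.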



Now that I had a way to generate the collection of test cases, I had to find, for each test case, the two items that added up to the total store credit. Simply put, I needed to test the sum of all possible item pairs. That’s a cross product, and in LINQ you can do that with SelectMany. SelectMany is one of those operators that is horrible to use in dot notation, so I went with query notation first.





public static IEnumerable<Tuple<int, int>> Solve(this IEnumerable<ProblemData> testCases)
{
    foreach (var test in testCases)
    {
        var result = from i in test.StoreItems
                     from j in test.StoreItems
                     where i + j == test.StoreCredit
                     select new Tuple<int, int>(i, j);

        yield return result.FirstOrDefault();
    }
}



(I used an extension method because I didn’t want to expose the foreach loop to external code.)



This worked, but I didn’t have to do a full cross product, as addition is commutative. What I needed was to pair each item with the items that come after it in the list. I could do that with SelectMany too, but not in query notation, so I rewrote the query (uglier this time).






public static IEnumerable<Tuple<int, int>> Solve(this IEnumerable<ProblemData> testCases)
{
    foreach (var test in testCases)
    {
        yield return test.StoreItems
            .SelectMany((item, index) => test.StoreItems.Skip(index + 1), (a, b) => new Tuple<int, int>(a, b))
            .Where(t => t.Item1 + t.Item2 == test.StoreCredit)
            .FirstOrDefault();
    }
}



SelectMany first receives a lambda that generates an IEnumerable<T> for each original item. In my case, I wanted to generate, for each item, the enumeration of items that come after it: for the first one, all the items except the first; for the second one, all the items except the first two,… So I used an overload of SelectMany that passes the index of the item to the first lambda, and I skipped my index (0-based) plus one to get the correct collection. The rest of the query was the same: I merged both items into a Tuple and then tested that they added up to the store credit. But to be honest, I’m not sure I would have been able to write this without first writing the method in query notation to guide me.
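A tiny worked example (mine, not from the original post) may make the pairing clearer. For a three-item list, the indexed SelectMany produces only the “upper triangle” of pairs:

```csharp
using System;
using System.Linq;

// Minimal demo of the indexed SelectMany overload: each item is paired
// only with the items that come after it in the sequence.
var items = new[] { 1, 2, 3 };
var pairs = items
    .SelectMany((item, index) => items.Skip(index + 1),
                (a, b) => new Tuple<int, int>(a, b))
    .ToList();
// pairs is (1,2), (1,3), (2,3): no duplicates in reversed order,
// and no item paired with itself.
```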



Lastly, I needed a method to write the solutions to a file. The File class had a great WriteAllLines method that did most of the work for me.





public static void WriteProblemSolution(string fileName, IEnumerable<Tuple<int, int>> solution)
{
    File.WriteAllLines(fileName, solution.Select((t, index) => string.Format("Case #{0}: {1} {2}", index + 1, t.Item1, t.Item2)));
}



Again, I used a handy overload that gave me the index I was working with (this time with Select) to create the right output string with string.Format. So the only thing that remained was putting everything together.





static void Main(string[] args)
{
    Console.Write("Write filename: ");
    string file = Console.ReadLine();
    ProblemLogic.WriteProblemSolution(string.Format("{0}.{1}", file, "out"), ProblemLogic.ReadProblemFile(file).Solve());
}



Easy :) But I made a big mistake reading the problem description: what I needed to print were the item indices, not the item values. So I had to change a few things here and there:




  • ProblemData’s StoreItems was changed to IEnumerable<Tuple<int, int>>: a Tuple holding both the value and its position, instead of an int for just the value.

  • PrivateReadProblemFile would also use the Select overload with the index to generate the tuples saved into a ProblemData object.

  • Solve would become even uglier, because instead of a Tuple<int, int> merging two ints, I had to create a Tuple<int, int, int, int> to hold both values and their positions. And I needed to add a Select before the FirstOrDefault to build the Tuple<int, int> I was going to return.

And now, time to execute. The code ran very fast with both the small and the large dataset, so I didn’t need to go crazy optimizing. For example, if there were lots of items in each case, maybe sorting them first and then iterating over them in a cleverer way could save some time, but it would be a pain. It would be nice if there were an optimization I could do without having to break my head thinking…
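For the record, there is a classic trick that avoids the O(N²) pair search without much head-breaking: remember the prices already seen in a dictionary and look up the complement of each new price, which is O(N) on average. A sketch, with a method name and shape that are mine, not from the original code:

```csharp
using System;
using System.Collections.Generic;

public static class PairFinder
{
    // Hypothetical O(N) alternative to the pairwise search: for each price,
    // check whether its complement (credit - price) was seen before.
    // Returns the 1-based indices, lower one first, or null when no pair
    // adds up to the credit.
    public static Tuple<int, int> FindPair(IList<int> prices, int credit)
    {
        var seen = new Dictionary<int, int>(); // price -> 0-based position
        for (int i = 0; i < prices.Count; i++)
        {
            int complement = credit - prices[i];
            int j;
            if (seen.TryGetValue(complement, out j))
            {
                return new Tuple<int, int>(j + 1, i + 1);
            }
            if (!seen.ContainsKey(prices[i]))
            {
                seen[prices[i]] = i;
            }
        }
        return null;
    }
}
```

With the problem’s sample data (credit 100, prices 5 75 25) this finds the pair at positions 2 and 3 in a single pass.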



Let’s think a little about the problem. We are just solving a set of cases, each one totally independent from the next… Mmm, that’s a good sign something can run in parallel. And the nice thing is that while I have very little idea of the TPL and parallel programming in general, I can use PLINQ (Parallel LINQ) very easily in this case.





static void Main(string[] args)
{
    Console.Write("Write filename: ");
    string file = Console.ReadLine();
    ProblemLogic.WriteProblemSolution(string.Format("{0}.{1}", file, "out"), ProblemLogic.ReadProblemFile(file).AsParallel().AsOrdered().Solve());
}



I just had to be careful and tell PLINQ that I wanted the results ordered :) After that, when running the big file, I could see all four cores of my computer working. For the small file, I wasn’t sure whether all the thread synchronization would make the code run slower than just doing things sequentially, but I didn’t care much (PLINQ is pretty clever about when not to run something in parallel, but I would have to profile and take timings to be sure).
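As a toy illustration of why AsOrdered matters here (my example, not from the project): without it, PLINQ is free to yield results in whatever order the partitions finish, which would scramble the “Case #x” numbers derived from each result’s position.

```csharp
using System.Linq;

// AsOrdered() forces PLINQ to deliver results in the source order,
// even though the Select itself runs on several threads.
var squares = Enumerable.Range(1, 100)
    .AsParallel()
    .AsOrdered()
    .Select(n => n * n)
    .ToList();
// squares comes back as 1, 4, 9, … 10000 regardless of which
// partition finished first.
```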



And that’s it. Honestly, doing this small exercise was pretty fun, it didn’t take much time, and it gave me a feeling of achievement that is sometimes missing from crazy big projects. I’ll probably do more in the future to keep practising (I’m sure some things can be improved in this one, but I’m pretty happy with the resulting code).



Here is all the source code, in case someone is interested (with minor renamings here and there).



TestCase.cs (was ProblemData)





namespace GCJ2010StoreCredit
{
    using System;
    using System.Collections.Generic;
    using System.Linq;

    public class TestCase
    {
        public TestCase(int storeCredit, IEnumerable<Tuple<int, int>> storeItems)
        {
            this.StoreCredit = storeCredit;
            this.StoreItems = storeItems;
        }

        public int StoreCredit { get; private set; }

        public IEnumerable<Tuple<int, int>> StoreItems { get; private set; }

        public override string ToString()
        {
            return string.Format("{0} / {1}", this.StoreCredit, this.StoreItems.Count());
        }
    }
}



ProblemSolver.cs





namespace GCJ2010StoreCredit
{
    using System;
    using System.Collections.Generic;
    using System.IO;
    using System.Linq;

    public static class ProblemSolver
    {
        public static IEnumerable<TestCase> ReadProblem(string fileName)
        {
            IEnumerable<string> lines = File.ReadAllLines(fileName);

            int numberCases = int.Parse(lines.First());
            lines = lines.Skip(1);

            for (int i = 0; i < numberCases; i++)
            {
                yield return ReadTestCase(ref lines);
            }
        }

        private static TestCase ReadTestCase(ref IEnumerable<string> lines)
        {
            var data = lines.Take(3);
            lines = lines.Skip(3);

            int credit = int.Parse(data.ElementAt(0));
            string[] items = data.ElementAt(2).Split(new char[] { ' ' }, StringSplitOptions.RemoveEmptyEntries);

            return new TestCase(credit, items.Select((item, index) => new Tuple<int, int>(int.Parse(item), index)));
        }

        public static IEnumerable<Tuple<int, int>> Solve(this IEnumerable<TestCase> testCases)
        {
            foreach (var test in testCases)
            {
                yield return test.StoreItems
                    .SelectMany((item, index) => test.StoreItems.Skip(index + 1), (a, b) => new Tuple<int, int, int, int>(a.Item1, a.Item2, b.Item1, b.Item2))
                    .Where(t => t.Item1 + t.Item3 == test.StoreCredit)
                    .Select(t => new Tuple<int, int>(t.Item2 + 1, t.Item4 + 1)).FirstOrDefault();
            }
        }

        public static void WriteSolution(string fileName, IEnumerable<Tuple<int, int>> solution)
        {
            File.WriteAllLines(fileName, solution.Select((t, index) => string.Format("Case #{0}: {1} {2}", index + 1, t.Item1, t.Item2)));
        }
    }
}



Program.cs





namespace GCJ2010StoreCredit
{
    using System;
    using System.Linq;

    class Program
    {
        static void Main(string[] args)
        {
            Console.Write("Write filename: ");
            string file = Console.ReadLine();
            ProblemSolver.WriteSolution(string.Format("{0}.{1}", file, "out"), ProblemSolver.ReadProblem(file).AsParallel().AsOrdered().Solve());
        }
    }
}


Doing code katas was posted on 02/24/2011 at Kartones.Net.

Friday, February 18, 2011  |  From Jad Engine Blog

[book cover]

XNA developers have to write their games using C# (well, I’m sure there are some exceptions using VB.NET, F#,…), and this book is aimed at teaching the ins and outs of C# in detail. It is written by Jon Skeet, Microsoft MVP in C#, top user on Stack Overflow, and in general one of the most knowledgeable people out there about the language. Oh, and it has a foreword by Eric Lippert (read his blog if you don’t already).

The book is divided into five parts: an introduction, a part for each C# version from 2 to 4, and appendixes with bits of info on LINQ operators, collections,… The “introduction” chapter is more like an overview of the rest of the book plus a refresher of some basic concepts, although it’s assumed the reader is comfortable with C# 1. While in theory you could read this book knowing only C# 1 very well to learn what was added in C# 2 to 4, that would be a pretty hard road to follow, as there’s a lot of material to swallow and in some spots it gets pretty hard (like when discussing how type inference was improved in the compiler for C# 3). Having a little familiarity with some of the most important topics (generics and delegates/lambdas) will make the rest of the book much easier to understand.

I’m not going to review the chapters about the major and minor features added to C# in each of its versions, as I would just be listing a set of features (C# 2: nullables, generics,… C# 3: extension methods, expression trees,… You get the idea). Instead, I’m going to focus on how the overall book is written.

First, Jon’s writing style is easy, simple, and straight to the point, even when talking about complex subjects. For example, he makes a big effort to explain what every term he uses means when there can be various interpretations (or when he doesn’t follow the C# specification terminology). And he adds some light, fun details and comments here and there, but he doesn’t go overboard with them.

Second, Jon makes a lot of comments about writing code that is easier to understand, read, and maintain. He shares a lot of his personal preferences and tips and tricks for writing clearer code. But at the same time, he explains both sides of the story, even if he likes one more than the other. He doesn’t try to impose his ideas; he tries to make you think about how you write code.

Third, there are also lots of short pieces of code in every chapter, so you aren’t left reading pages and pages of text without an example, and when those examples are artificial he notes it. Also, Jon realizes something most technical writers fail to see (or see but do little to fight): reading source code in a book sucks. Books have pretty “short” lines, so formatting big code blocks gets crazy pretty fast. Jon writes all the source code in the book as C# snippets that can be easily understood but aren’t cluttered with boilerplate (using directives, the Main method,…), and at the same time he has developed a tool (Snippy) that lets you copy and paste those snippets into it and run them perfectly, without having to do the work manually in VS. It’s a minor detail, but it adds to the overall feeling of quality and labour of love that this book shows.

And last, Snippy and lots of other content (like the source code for all the book examples) can be found on the book’s companion website: http://csharpindepth.com/ Especially interesting is the Articles section, where Jon talks in more detail about some topics he can’t cover in the book (in general the book has a lot of references to internet sources for further explanation or deeper insight into a given subject).

So, to close the review: if you are interested in knowing C# well, I can’t help but recommend this book. Even more so for XNA devs: while we may not be able to use some of the things explained here, we need to know the language and compiler we are working with in more detail than most devs to get the most out of them and avoid some performance pitfalls.



Friday, February 11, 2011  |  From Jad Engine Blog

Given that I am still a little stuck writing technical posts, I’ve decided I’ll be adding some reviews to the blog that may be of general interest.

For example, I have been spending quite a lot of time lately reading books, so I will be commenting on some books related to .NET, XNA and game development in general.

The first book will be the incredible C# in Depth, 2nd Edition by the great Jon Skeet. I have nearly finished it and, honestly, this book should be mandatory for anyone who wants to write serious C# code. It’s full of great technical details and explanations of the features of C# 2.0 to C# 4.0.

And the second one will be Tortoise SVN 1.7 Beginners Guide. This one may seem stranger at first, but anyone who is half serious about writing games (or software in general) should be using some kind of version control software, and among the free (or very cheap) ones SVN is the clear winner. So this book review is for all those amateurs who are starting out and need a little help on the topic. I have some others in my “to-read” list (about XNA 4.0 and WP7, for example), but those will come later.

I will also be talking about some XBLIG games, and I will start with some of the great games from radiangames. I know he has now moved to Unity and left XBLIG, but honestly his work on the channel should serve as inspiration for other creators, and as an example of what people should try to achieve in terms of quality.

And lastly, as an MVP, I get software from some companies in the form of NFR (not-for-resale) licenses. It’s only fair that I review them; after all, some of these products can cost quite a lot. As with the books, they will be related to game development (for example, obfuscators for .NET code, installers,…).


Doing reviews was posted on 02/11/2011 at Kartones.Net.

Tuesday, January 18, 2011  |  From Jad Engine Blog

Well, not a great year for the blog, really. My real life has been very busy with my daily work at C Tech, my university classes at ESNE, and the Xbox360 project Iredia: Atram’s Secret. I’ve been so tired of programming that I didn’t feel like posting anything on the blog.

Also, my big project for 2010, the flexible RPG library using dynamic, started great, but then as I started defining things in more detail I found problems at every turn, probably because I was over-engineering and trying to make something TOO flexible, and because I was perverting C#, dynamic and DynamicObject too much. The lesson here is that if you want that much dynamic typing, go and use a dynamic language. I’ll probably revisit that project in the future, with a less crazy scope more suited to C#. Still, I’m happy about all I’ve learned about dynamic and expression trees along the way; it has come in useful in other situations at least.

In the last part of 2010 I started a new project with a new MS technology called Visual Studio LightSwitch, which has surprised me a lot. The tool feels great for creating data entry/editing apps, the support from MS in the forums is great, and there’s quite a lot of documentation and tutorials. I hope they release a new beta or CTP soon and that MS doesn’t end up killing it. I’ll probably post some details and images of the app in the future, when I feel it’s a little more polished and complete, and I’ll post some comments on LightSwitch.

And what about XNA? Well, the 4.0 version has been all about bringing XNA to Windows Phone 7 (and the big cleanup of the API this needed), but I hadn’t played much with it because the tools work so-so on my computer and I wanted to try it on a real device. But until two weeks ago I didn’t have a winphone (I got the Samsung Omnia 7 and I love it so far). Now that I have a phone I have started my first official project in XNA 4.0: a Fire Emblem clone (I just love Japanese tactics-RPG mixes, like Fire Emblem, Front Mission, Final Fantasy Tactics,…).

I suppose the project will advance very slowly, but well, I’m not in a hurry, really. My idea for the blog this year is, first and as expected, to try to post more often, and not only about XNA development but also about the XBLIG world. After my involvement with Iredia, the situation of XBLIG placement in the Xbox360 dashboard, and the XBLIG Winter Uprising, I have started spending more time in the Xbox LIVE Indie Games section of the App Hub forums. I am also far better informed since I started following George Clingerman (among other MVPs and creators), who tweets a lot of interesting things about XBLIG and the community.

I have come to the conclusion that while as an XNA/DX MVP I am supposed to be a technical leader, in XNA one of the main things I should also do as an MVP is support the XBLIG community by understanding their concerns about the platform and the distribution channel. Because honestly, the technical part of XNA is simply amazing; there are few complaints there. But things could be improved in non-technical areas for the PC, Xbox360, and WP7, and I’ll try to talk more about this on the blog and help give more visibility to outstanding creators.

Let’s see if I deliver or not :)


Summary of 2010 was posted on 01/18/2011 at Kartones.Net.

Thursday, December 16, 2010  |  From Jad Engine Blog

(Personal post, nothing related to XNA, .NET or game development here; sadly those posts will have to keep waiting, I’m stuck on writing technical stuff.)



I recently spent two weeks of holidays in Vietnam. Before the trip I was a little worried about several things: I’m very picky with food, it was my first organized trip and I didn’t know what to expect (I like travelling on my own), it was my first trip to a developing country,… But in the end my fears were unfounded: the food was plentiful and tasty, the travel group and our tour leader were great people, and Vietnam was an amazing place to visit. The country is beautiful, and the people hard-working, moving away from the ghosts of war.



But not everything was pretty, and there were some sad stories that crossed my path during this trip. One of the most touching happened in Sapa, a region in the north of the country.



That day our tour leader had organized a visit to a small village of one of the many hill tribes that live in Vietnam, the Red H’mong or Red Dzao, I can’t remember. Once we arrived by bus, we were “greeted” by a swarm of women. I say “greeted” because what they were really doing was deciding which of us each of them would try to sell her handcrafted goods to, so that instead of all trying to sell to the same tourist, they could split into small groups and get more sales out of us. This is pretty common in the Sapa area, and it had happened to us before around the hotel, so I wasn’t very surprised, although as usual it felt a little tiring knowing someone would follow you for the whole trip trying to sell you something again and again.



I got two old women who sadly couldn't speak much English, so even though I tried to get some information from them about their lives, I got nothing. They weren't too friendly either and made little effort to communicate, so after a while I decided to shut up and concentrate on taking pictures and enjoying the sights while ignoring the women shadowing me.



Once we got to the midpoint of the trip, just before heading back, the moment came to start bargaining and buy something from the women who had selected you. I got one thing from each of my two women and tried to slip out of the group, because as soon as I said I was going to buy something, others started saying I should also buy from them because they had also talked with me during the trip, or because they were friends. I started to argue, and after getting a little angry and raising my tone, they stopped their complaints.



Then I heard a voice teasing me: "it's not true that you only talked with those two, but you can't really buy from everyone." I was taken totally by surprise: the voice was clearly Vietnamese, but it was the first time I had heard anyone there speak English so well. I turned around to find a girl in her early twenties (I later found out she was 23). She was dressed like the older women, and she was also carrying a basket full of hand-made goods.



I introduced myself and asked the girl's name: Pah Me (no clue about the spelling, but she said the pronunciation was like the start of "Pamela"). I started talking with her, asking questions about her people and their way of living. She was easy to talk with, and eager to answer all my questions. After a while we started talking about other topics, and one thing became clear to me: the girl was pretty clever, and it seemed a pity that she had to stay in the village farming rice and trying to sell her goods to tourists. So I decided to ask her if she had plans to go to university or any other form of higher education.



Then she told me that she couldn't afford it, and that only one of her friends had been able to go to university, thanks to an Australian couple who had paid her study costs. Intrigued, I asked her how much it cost.



50 US dollars. A year.



I stopped right in my tracks after hearing the amount. I was probably carrying enough dollars with me at that moment to pay for a 4-5 year degree. It was tempting to say: "hey, take this, study and get out of this place". Sadly, things don't work like that. After that moment we continued chatting. For the girl it was business as usual, but for me things had turned more serious: the money I earn in a day could completely change the rest of the life of the person I was talking with. It was brutal to face that fact so directly, even though deep down I had been aware of that reality before coming to her country.



Near the end of the way back, I told her I was going to buy something from her, and that I was really glad for all the things she had told me. I really think that was a much better way to sell something to a tourist, although probably not all the women could speak English well enough for that. She started showing me her goods, and I got interested in a piece of cloth that women over there use for their weddings. I have a friend who is getting married next year, and it would make a great gift for his girlfriend, so I asked for a price.



When I said that, she changed her way of speaking a little, sounding more like the older women. It was a pity, because I thought that after our nice chat, and with me asking on my own for something to buy, there was no need for "merchant-speech". She also quoted a pretty "high" starting price, but I had decided in advance not to bargain, so I accepted it without a word, although I was a little sad about how the whole situation had developed. I think she realized a little too late that none of that was necessary, as she suddenly offered to let me take something else for free. I smiled at the gesture, appreciating its meaning.



After that, I asked someone in our group to take a picture of the two of us, and we continued exchanging stories until we arrived at the bus and I departed the village.






I have many other happy and sad memories of this trip that will travel with me for a long time, but I hope I will never forget this particular event. It reminded me of how privileged I am to have the life I have, and how little I value it sometimes.



Memories of Vietnam was posted the 12/16/2010 at Kartones.Net.


Last edited Jun 5, 2007 at 10:57 PM by ReedCopsey, version 3
