
I fixed up Video Browsers sort by date


The “new release”:/downloads/1 of Video Browser contains the following fixes and improvements:

  • Added support for mov and rmvb movie types
  • Dim the list box when moving to sort order
  • Fixed sort by date to choose the earliest date of any of the movies in the directory (for movies only)

Let me know if anything is broken.


You should be very careful when using ActiveRecord eager loading


Update: this issue is resolved in edge Rails; if you are using a version higher than 2.0.2 you should be safe.

ActiveRecord eager loading is a force to be reckoned with. Recently, I had to write some reports that span many models and thought I would save myself some database round trips by eager loading the data.

What is eager loading you ask?

The ActiveRecord “documentation”:http://api.rubyonrails.com/classes/ActiveRecord/Associations/ClassMethods.html says: “Eager loading is a way to find objects of a certain class and a number of named associations along with it in a single SQL call. This is one of the easiest ways to prevent the dreaded 1+N problem in which fetching 100 posts that each need to display their author triggers 101 database queries. Through the use of eager loading, the 101 queries can be reduced to 1.”

Wow, that sounds fantastic, if I have a post with 100 comments I can load it all up in one round trip to the database. But there is a catch here, the current implementation of eager loading leaves a lot to be desired.

Imagine if you have a post with 100 comments and 100 images. If you eager load your data the SQL produced may surprise you:

Post.find(:all, :include => [:comments, :images])

ActiveRecord will generate a single query that left joins comments on post.id and left joins images on post.id.

The problem is that the database will give you back 100 × 100 = 10,000 rows for that single post, because every comment row is joined against every image row. That is seriously flawed.

So, imho a patch is needed here, this feature is just too dangerous to have in rails.

h2. The solution

Well, the code needs to be a little more sophisticated. The most efficient way of retrieving all the posts with all the images and comments is:

1. Get all the posts
2. Get all the comments
3. Get all the images
4. Update all the post models with the comments and images loaded in steps 2 and 3

So, when eager loading, I think the best approach is to execute a query per model when dealing with has_many associations. For has_one and belongs_to the current implementation is alright.
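Here is a minimal sketch of the query-per-association idea (the model names and the manual stitching are illustrative assumptions, not the actual Rails internals):

posts = Post.find(:all)
post_ids = posts.map { |p| p.id }

# one extra round trip per has_many association
comments = Comment.find(:all, :conditions => ["post_id IN (?)", post_ids])
images   = Image.find(:all,   :conditions => ["post_id IN (?)", post_ids])

# group the children by parent id and hand each post its own slice:
# 3 queries and 1 + 100 + 100 rows, instead of one query returning a
# 100 x 100 = 10,000 row cartesian product
comments_by_post = Hash.new { |h, k| h[k] = [] }
comments.each { |c| comments_by_post[c.post_id] << c }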

You should be really careful with eager loading in the current ActiveRecord, it can take out your “production servers”:http://toolmantim.com/article/2006/9/29/web_connections_eager_loading_headaches and cause huge memory spikes.

Speed up your feed generation in Rails


The problem

When I first launched this blog I decided to benchmark how long it takes to generate the various pages on my web site. I found that the two slowest pages to generate are my atom and rss feeds: they take on average 150ms to generate.

So, I decided to dig down and figure out why this is so slow. After watching the Railscast episode about request profiling I decided to try it out on my atom feed.

So, I installed the ruby-prof gem and created a little script (“get ‘/posts.atom’”) and ran it 30 times:

script/performance/request -n 30 lib/profile_atom_feed.rb  

Next up I dug into the output (which is in the rails tmp folder)

I discovered there were a TON of calls being made to the function String#to_xs, all seemingly originating from builder, and they were taking more than 50% of the time. (It turns out the profiler lies and they were taking more than 90% of the time.)


Let’s look at the source code in builder:

 
class Fixnum
  XChar = Builder::XChar if ! defined?(XChar)

  # XML escaped version of chr
  def xchr
    n = XChar::CP1252[self] || self
    case n when *XChar::VALID
      XChar::PREDEFINED[n] or (n<128 ? n.chr : "&##{n};")
    else
      '*'
    end
  end
end

class String
  # XML escaped version of to_s
  def to_xs
    unpack('U*').map {|n| n.xchr}.join # ASCII, UTF-8
  rescue
    unpack('C*').map {|n| n.xchr}.join # ISO-8859-1, WIN-1252
  end
end
 

Ok, so to_xs is a method for turning a string into an xml-safe string. It calls Fixnum#xchr for every character it is passed, since Builder needs to ensure that all the text it renders is xml safe. So there you are: once per character (barring xml tags) in my atom feed, a call is made to Fixnum#xchr, which involves a complex bit of lookup logic over ranges. Nothing is wrong with this code, but it does involve looking up a value in up to 2 hashes (CP1252 and PREDEFINED) and 1 range lookup (VALID). This all adds up, especially if you have a big rss document.
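To make the behaviour concrete, here is what to_xs does, as seen from irb (a quick sketch):

require 'builder'   # defines String#to_xs

"AT&T <rocks>".to_xs  # => "AT&amp;T &lt;rocks&gt;"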

I explored some mini optimizations:

This monkey patch

 
class Fixnum
  alias_method :xchr_old, :xchr  

  def xchr
    # memoize the escaped form of the first 256 code points;
    # anything above that falls back to the original lookup
    @@XChar_Cache ||= (0..255).map { |x| x.send :xchr_old }
    @@XChar_Cache[self] or xchr_old 
  end 

end
 

Gives me a 2X speed improvement. I suspect that with a bit of Ruby fu you could push this to a 4x speed improvement. But… I decided to Google a bit.

h2. The solution

The easiest thing to do is


sudo gem install fast_xs 

This makes my feed generation 10x faster. What it does is natively implement String#to_xs. The good news is that Rails 2.0.2 and later is aware of this gem, so all you need to do is install it and restart your Rails app.
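If you want to verify the speedup yourself, here is a quick benchmark sketch (numbers will vary with your feed size); run it once before and once after installing fast_xs:

require 'benchmark'
require 'builder'   # defines String#to_xs

doc = "some post content with <tags> & entities " * 5_000
puts Benchmark.measure { 10.times { doc.to_xs } }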

Here is a screenshot of the state of affairs after the patch.


Running IE6 on Windows Vista


I have taken a break from running Linux on my desktop today. I had to: my RAID adapter does not like Linux. It likes crashing Linux and making the machine emit weird loud noises. So, I’m regrouping and seeing if there is a different solution to my problem.

My current project is an intranet web development project. The target is IE6. Well, designing web “2.0” applications which target IE6 means you better “test on IE6”:http://www.positioniseverything.net/explorer.html and often.

Microsoft’s “solution”:http://blogs.msdn.com/ie/archive/2006/11/30/ie6-and-ie7-running-on-a-single-machine.aspx is: run a virtual PC.

But.. my virtual PCs do not do “unity mode”:http://www.vmware.com/products/beta/ws/releasenotes_ws65_beta.html that well yet.

Except for one virtualization solution. Enter “colinux”:http://www.colinux.org/ and “andlinux”:http://www.andlinux.org/ :

IE6 on Vista

IE6 running side by side as a proper native window on Windows Vista. Now, if I can only get cut-and-paste to work.

The basic steps I followed (after downloading the ies4linux tarball):

apt-get install wine
 
tar -xzvf .... 
./ies4linux --no-gui

We are done. Now we can bask in the glory of “Internet Explorer 6”:http://www.google.com/search?q=i+hate+ie6.

Video Browser: Past, Present and Future.


h2. The Past

About a year ago I decided it was time to say goodbye to my xbox media center. It served me really well: it played most of the videos I had, it had a really nice and simple interface, and it was pretty stable. But… it was far from perfect. The xbox is really noisy, and it did not play quite a few of the formats I had cause it did not have the power. It had no support for TV recording and would occasionally hang.

So, I thought, I’ll buy a Mac mini and try it out: it’s pretty quiet, has the grunt to play almost any video format, has fast networking, and you can plug a USB tuner into it. It took me a few months to give up on OSX as my media center. Front Row is nice, it has movie preview support, and you can get it to play almost any format, but it had quite a few issues which I just could not sort out. Apple made too many really annoying decisions for me. Can I sort my videos by date? Apple says no. Can I have integrated TV tuner support? Apple says no. Can I extend and develop on top of the “Front Row” platform? Apple says no. And on top of all of these issues I was having codec problems: stuff would work great in VLC and far from great in Front Row.

I did not give up that easily though: I tried “Center Stage”:http://centerstageproject.com/ which did not work. I tried “iTheater”:http://www.itheaterproject.com/ which was way too alpha. I bought “MediaCentral”:http://www.equinux.com/us/products/mediacentral/index.html which I gave up on quite quickly; I remember trying to explain to their support staff that you should be able to fast forward through AVIs (at least in 30 second skips). I lost that battle.

h2. Vista Media Center

So… I went back to the drawing board. I installed bootcamp and then installed Vista and got Vista Media Center going. My Terratec USB tuner that worked fine in OSX kept on crashing Vista. I bought a “new USB TV tuner”:http://www.digitalnow.com.au/product_pages/tinyusb2.html and it seemed I was back in business.

Some people think Vista MCE is “the best product ever”:http://www.codinghorror.com/blog/archives/000784.html. I think it’s the crack talking. When I first used Vista Media Center I was shocked. Why the hell do you need such a complex initial menu? What is the logical relationship between “pictures and videos”:http://www.winsupersite.com/images/showcase/vista_mce_01.jpg , isn’t a movie a video? I think Microsoft did a good job with the TV recording piece, and with the configuration screens. But the initial layout is all wrong, and the weakest piece they released was the “Browse Videos” piece. I just couldn’t believe it. It seemed it was constantly trying to cache thumbnails; why, I don’t know. No metadata, no dvd support “without a hack”:http://thegreenbutton.com/forums/1/211911/ShowThread.aspx, no advanced sorting options, no way to view your items as a list.

But VMC did one thing much better than Apple: it allowed developers to write plugins for it. Now, the VMC platform is ghetto, but at least it exists. My first shot at Video Browser was in XAML; this version was very short lived, cause Microsoft decided to “deprecate”:http://discuss.mediacentersandbox.com/forums/thread/6623.aspx XAML development on Media Center.

So, I downloaded the Vista MCE SDK and started playing around. When I say this is ghetto, I mean it. You have no drag and drop GUI designer. The APIs are complicated, and to get some things to happen you have to resort to nasty “reflection hacks”:http://discuss.mediacentersandbox.com/forums/thread/6011.aspx . On top of that you never know when some big shot at Microsoft is going to decide to kill off MCML and rebuild it on top of WPF. I think it’s bound to happen one day.

Digression aside, I got my plugin out and a few iterations later I was getting 100 downloads a day.

h2. Can I make money out of this?

My current day to day work is on a fantastic ArchLinux machine, I write Ruby and Ruby on Rails code in Vim and it keeps me happy. Video Browser has always been a hobby, not an income stream. So I thought, what the hell, I’ll just open source this thing and see what happens. I don’t need to make money out of this project. I can just whack this on my resume and if someone asks for some example code, I can show them Video Browser.

h2. GPL

I decided to open source Video Browser under the GPL. I was thinking about a BSD-style license and in some ways would still prefer it; it fits much better with all my Ruby work. But no devs stepped up and asked for it. The biggest reason for going with the GPL was that the “other”:http://code.google.com/p/open-media-library/ open source VMC project was GPL and I wanted to be able to share code with them.

I have been really lucky with this whole open source experience. From go “Jas”:http://blog.manghera.com/ decided to join the efforts and gave us a whole new look and feel. The “community”:http://videobrowser.ch that has formed around this product has been really great and there has been a lot of positive feedback. We have also made a huge amount of progress in a really short amount of time.

h2. The Future

I use Video Browser every day and I’m very happy with it, but there is still a ton of stuff left to do. I wish we had auto update functionality so you didn’t have to use the keyboard every time you want to get the latest version. I think navigation between the content area and the sort options area can be improved and made more intuitive. I think it would be awesome to support online content and movie trailers. Maybe a Video Browser like application for music would also make sense.

One of the biggest areas which would make it much more compelling for first time users would be an integrated metadata story. At the moment there is a very clear distinction between metadata and browsing: you collect all your metadata for movies using windows apps and all your metadata for tv shows using a ruby script. This is really complicated for many users. Personally, I love the underlying architecture where the metadata is stored as close as possible to the videos; that has to stay. But I think that if a complementing MCML app with some internal hooks could gather the metadata without having to use ruby or a windows app, it would make Video Browser much more appealing for first time users. Imagine an “Add DVD” menu strip that rips the movie and adds metadata for it at the same time. It would be pretty cool.

In no way do I mean to knock the existing metadata tools that we have, I think they are awesome; it’s just that they are targeted at hackers. (Especially the ruby script I wrote.)

In future, I hope we can make Video Browser faster and keep the bug count really low, I hope it can stay simple and provide a really compelling story for first time users. When people come over to your place and have a look at Video Browser I want them to say “Wow … that looks so easy, I wish I had something like that”

SSH Jumphosts tunneling and other curiosities


The IT department is in love with their brand new proxy server. They are so in love with it that they decreed that all connections to the Internets must go via the proxy server, no exceptions.

My brand new web 2.0 intranet application must also live behind this proxy server. I think it makes perfect sense, except for one minor issue. How do I support this thing? I discussed my solution with the IT department and they were fine with it. So here you go.

h2. Corkscrew

“Corkscrew”:http://www.agroman.net/corkscrew/ is a tool for tunneling through http proxies. It’s a little bit like netcat, which in turn is a little bit like telnet. It asks the proxy to CONNECT to your real destination and then relays your data stream through it (base64 only comes into play for proxy authentication).

Corkscrew and ssh are friends; all you have to do is add the following to the ~/.ssh/config file:

 
Host *
   ProxyCommand corkscrew theawsomeproxy 8080 %h %p ~/.ssh/proxyauth

(The last argument is optional: it names a file containing user:pass, for proxies that require basic authentication.)

And magic, you can start sshing out of the firewall, through the proxy server.

h2. Setting up a stable reverse tunnel

“Quite”:http://www.revsys.com/writings/quicktips/ssh-tunnel.html “a”:http://wiki.mt-daapd.org/wiki/SSH_Tunnel “lot”:http://gentoo-wiki.com/TIP_SSH_Reverse_Tunnel has been written about SSH tunneling. The concept is fairly straightforward: using SSH and a few switches you can either push a port from your local machine to a remote host (reverse tunnel) or pull a port from a remote machine locally (tunnel).

I wanted to be able to ssh into the firewall from the internets. So I had to push port 22 on my intranet server to my public server.

The command to do this is fairly trivial, something along the lines of the following will do:

 
ssh -R 8888:localhost:22 public_server 

Once this is done I can ssh into public_server on port 8888 to access ssh on my server which is behind the firewall.
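For example (the user name is a placeholder):

ssh -p 8888 sam@public_server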

The big problem is that ssh tunnels behind NATs and Proxies die. Death of the ssh tunnel means I no longer have remote access, which means I have to drive to the client to fire it off again. This is obviously not acceptable.

So the internet has quite a few solutions to this problem, the most popular one seems to be a tool called “autossh”:http://www.harding.motd.ca/autossh/.

This tool will restart your tunnels if they die. I set this tool up and it seemed to work fine for a day or two. But somewhere between me not configuring the tool right and the almighty proxy, autossh seemed to fail on me. So, I decided that instead of spending a week debugging this I might as well write a simple script in ruby to monitor my ssh tunnel.

h2. Super SSH

I chucked my “in progress script”:http://github.com/sambo99/super-ssh/tree/master on github for your enjoyment. What it does is fairly simple. It forwards the ssh port to the public server and tries to connect back to itself from the public server every minute. If the connection fails it assumes something went wrong and it will restart the tunnel.

It still needs a bunch of command line switches and a fair bit of work, but it seems to be doing the job for me.
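For the curious, the core idea boils down to something like this (a simplified Ruby sketch; host names are placeholders and error handling is omitted):

loop do
  # start the reverse tunnel in a child process
  pid = fork { exec "ssh -N -R 8888:localhost:22 public_server" }
  loop do
    sleep 60
    # from the public server, probe the forwarded port; a failure
    # means the tunnel silently died behind the proxy or NAT
    alive = system("ssh public_server nc -z -w 5 localhost 8888")
    unless alive
      Process.kill("TERM", pid) rescue nil
      break # fall through and restart the tunnel
    end
  end
end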

A key to getting anything along these lines to work is ensuring your ssh connection uses public/private keys for authentication. If you use ssh on a regular basis and passwordless ssh authentication sounds foreign, stop reading this and look it up on the internet. The two commands, “ssh-keygen”:http://www.openbsd.org/cgi-bin/man.cgi?query=ssh-keygen and “ssh-copy-id”:http://www.math.ucla.edu/computing/docindex/openssh-man-6.html are your friends, use them.

h2. Working from home, jumphosts

In order to deploy my application from home I need to connect to the server behind the firewall via my public server. I can always ssh twice, but it seems like a little bit of a headache.

I wanted to type “ssh work” and have my dev box connect to the production box. I also wanted http://localhost:9999 to take me to the production web site, so I can test it.

Here is how I achieve this with my ~/.ssh/config file

Host work
   ProxyCommand ssh public_server nc -w 1 localhost 8888
   LocalForward 9999 localhost:80

The ProxyCommand directive chains my ssh tunnels (using netcat), the LocalForward command forwards a remote port locally.

Hope this helps someone else out there.

Deadlocked


I was reading “this article”:http://www.codinghorror.com/blog/archives/001166.html about some recent deadlocking issues Jeff Atwood was having with the brand new “Stack Overflow”:http://www.stackoverflow.com site. In this article Jeff concluded that the WITH (NOLOCK) hint is a practical solution to many deadlocking conundrums which though, in theory, is very dangerous, in practice, solves his problem.

I think this is almost always the wrong conclusion. I think that in practice it is very dangerous.

h3. Why SELECT statements sometimes deadlock against UPDATE statements

The theoretical explanation is this: regardless of explicit transactions SQL Server tries to stay Atomic, Consistent, Isolated and Durable. The default isolation level for all SELECT statements is READ COMMITTED. To achieve this SQL Server uses “locks”:http://msdn.microsoft.com/en-us/library/aa213039(SQL.80).aspx .

SELECT statements acquire shared locks on various resources. These resources may be pages in the database where the data is stored or keys in indexes and so forth. These locks may or may not be held for the entire duration of the SELECT statement depending on various circumstances. Most of the time MSSql needs to acquire more than one lock to proceed with a portion of the SELECT statement. For example, it may need a shared lock on an index AND a lock on a page in the database to return results. In such cases MSSql determines the order of locks it needs to acquire and acquires them in what appears to be a linear fashion.

And this is where all the trouble starts. Say we have 2 database connections:

1. Connection 1: UPDATE statement acquires an exclusive lock on a key on index #1
2. Connection 2: SELECT statement acquires a shared lock on page #1 in the database
3. Connection 1: UPDATE statement attempts to acquire an exclusive lock on page #1; since someone else is already holding a shared lock, it starts waiting.
4. Connection 2: SELECT statement tries to acquire a shared lock on index #1; since someone else is already holding an exclusive lock, it starts waiting.
5. MSSql figures out that we have a deadlock, kills the SELECT statement and raises an error message.

If you can’t believe this is possible here is a demo:

Run the following code snippet in Query Analyzer:

create table posts (id int identity primary key, [content] varchar(7000), [group] int, date_changed datetime) 
create index idx_date_changed on posts ([group], date_changed) 

insert posts values ('post contents', 1, getdate())  

declare @i int 
set @i = 1
while @i < 5000
begin
 insert posts values ('post contents', @i, getdate()) 
 set @i = @i + 1 
end 

Open two Query Analyzer windows, in the first type:

set nocount on
declare @i int 
set @i = 1
while @i < 500000
begin
        -- use a temp table to avoid filling the query analyzer window with results 
    select * into #t from posts 
    where [group] = 1 
    drop table #t 
    set @i = @i + 1 
end 

In the second one:

update posts 
set [date_changed] = getdate() 
where id = 2 

Start the first query. Execute the second query a few times (it may take one run, it may take 20). Look at the messages on the SELECT loop; you should see something like the following:

Msg 1205, Level 13, State 51, Line 7
Transaction (Process ID 57) was deadlocked on lock resources with 
another process and has been chosen as the deadlock victim. 
Rerun the transaction.

So, you may ask yourself what’s going on.

First let’s look at the execution plans:

select * from posts where [group] = 1

The execution plan for the SELECT statement reveals that we first look up some records in the idx_date_changed index and then look up more data in the clustered index and join it up. The key here is the order in which we look up data in the two indexes.

Next we should look at the execution plan for the UPDATE statement:

update posts set [date_changed] = getdate() where id = 2

The execution plan for the UPDATE statement reveals we first perform a clustered index seek on the primary key and then an index update on the idx_date_changed index.

Notice that the two queries look up data in the indexes in opposite orders, which gives us a much better chance of deadlocking.

A handy trick when debugging deadlock issues is determining which locks each statement acquires. To see them you can hold the locks by wrapping the statement in an open transaction:

begin tran 
update posts with (holdlock) set [date_changed] = getdate() where id = 2 

Then in a second window you can execute sp_lock which returns the list of active locks.

Remember to commit that transaction later on…

In real life the first step is determining the two statements that deadlock, “so you should read the following KB”:http://support.microsoft.com/?kbid=832524

h3. What can happen if you use the NOLOCK hint

Well, the worst case scenario is that you may, once in a while, see phantom data, duplicate data or a bunch of missing rows. In most web apps this can be acceptable, but it can look very unprofessional.

If you used the NOLOCK hack in a banking application you would probably get fired on the spot.

A real example could be some database maintenance that requires a big fat transaction that, say, removes all the rows from a table and then adds the rows back in. In this case, users may get a blank front page on your website. There are lots of less subtle issues that may pop up in the e-commerce world, such as: billing a person twice, billing the wrong amount, adding a wrongly priced item to a basket etc…

h3. So how do you really fix these kinds of issues?

The cowardly yet honest answer to this is: “it depends”. First things first. Reproduce the problem in an isolated test harness so you have something to work with. Without this you will be shooting in the dark.

Here are a few approaches that may help.

  • Optimizing Queries

Are you joining to an unnecessary table? Are you returning too much data (too many columns or rows)? Is your query performing table scans? Can you restructure the query in such a way that it no longer deadlocks? Are you using a left join where you should be using a join? Have you reviewed your NOT IN clauses?

  • Optimizing Indexes

This ties in to the previous point: having too many, too few or the wrong indexes can hurt performance. Are there any missing indexes? Any superfluous covering indexes? Any duplicates?

  • Handling the deadlocks

You may not be able or willing to resolve every deadlock you have. In such cases consider handling the deadlocks in the app, logging the fact it happened and retrying (see the sketch after this list). These logs will be a goldmine when you go about designing your next version.

  • Caching

Perhaps you are executing a lot of the same queries very frequently. Cutting down the number of times that query executes will reduce the chances of deadlocks occurring. This may or may not create issues with serving out stale data, depending on the caching architecture. The caching can be implemented either in the database (using temp tables or tables) or in the application, in-memory or on disk. It really depends on the problem. In web apps you may sometimes want to cache whole pages on disk or in memory.

  • Application architecture review

An application architecture review may point out that a query that is deadlocking is being called 10 times per request when it should only be called once. It may reveal that the data is not really required or that the same feature is implemented efficiently in a different code branch.
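To make the “handle and retry” option concrete, here is a hypothetical Ruby sketch (the error matching and the db helper are placeholder assumptions; adapt them to your data access layer):

def with_deadlock_retry(attempts = 3)
  begin
    yield
  rescue => e
    # SQL Server raises error 1205 with "deadlock victim" in the message
    raise unless e.message =~ /deadlock victim/i && (attempts -= 1) > 0
    warn "deadlock detected, retrying (#{attempts} attempts left)"
    retry
  end
end

# db.execute is a stand-in for whatever your app uses to run SQL
with_deadlock_retry { db.execute "select * from posts where [group] = 1" }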

h3. Final words.

In Jeff’s post he says: “… in practice adding nolock to queries that you absolutely know are simple, straightforward read-only affairs never seems to lead to problems.”. Well, I don’t think so. Adding nolock anywhere may be hiding a fundamental design flaw in your software. It may cause very weird bugs. nolock should be avoided whenever possible and turning to snapshot isolation may not always be an option or a solution to the problem.

My server just died, long live my new VPS


A few months ago I set up an automatic backup script on my debian server. Its job was to send incremental backups, daily, to Amazon. This script ran a few mysql dump statements and a subversion dump statement. Once the data was copied locally, I used “duplicity”:http://duplicity.nongnu.org/ to back the data up incrementally to a local dir and then “s3sync”:http://s3sync.net/wiki to copy all the data onto my “amazon s3”:http://aws.amazon.com/s3/ slot.
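Roughly, the nightly job looked something like this (a Ruby sketch; the paths and bucket names are placeholders, not my real setup):

system "mysqldump --all-databases > /backup/staging/mysql.sql"
system "svnadmin dump /var/svn/repo > /backup/staging/svn.dump"
system "duplicity /backup/staging file:///backup/incremental"
system "ruby s3sync.rb -r /backup/incremental/ mybucket:server-backup"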

Daily my trusty cron job would send me an email telling me that all is good and backups are working.

A few days ago everything just stopped working; these kinds of things always tend to happen at the most inconvenient time. Turns out I had a dead power supply. Usually, this is not a really big deal, but my server is 6 years old and the power supply it had was a custom piece of hardware which probably can not be purchased anymore.

I was super lucky to have had a proper backup strategy. I decided it was time to scrap the old server and move to something new. Because I had backups this was feasible.

So, all should be up and running again. Building the new server image gave me a great opportunity to clean up a bunch of stuff.

h2. Lessons learned

  • If you are going to have your own server, you better have some sort of backup strategy that is reliable.
  • Backup more data than you think you need: it would have been handy to have my bind records in text format, instead of having to recreate them. My apache config would have been handy as well.
  • Test your backup system regularly. It takes time, but your server could get hacked or explode any day.
  • Incremental backups are really risky. I was lucky to have all the pieces, but it took about half an hour to reconstruct my backup from the pieces out there. If you are going incremental, have a full backup slotted in as well on a regular basis.
  • I think I prefer the VPS solution, since my data is now virtual I can download my server to my computer and play around with it. Snapshotting virtual images gives you a much simpler backup solution.

Slow response-time is usability's alcoholism


A fundamental pillar of usability is response-time. To quote some “fascinating research”:http://www.google.com.au/search?hl=en&amp;client=firefox-a&amp;rls=org.mozilla:en-US:official&amp;hs=BYY&amp;sa=X&amp;oi=spell&amp;resnum=0&amp;ct=result&amp;cd=1&amp;q=response+time+in+man+computer+conversational+transactions&amp;spell=1 from the sixties by my hero Robert B. Miller.

bq. “[Regarding] response to request for next page. […] Delays of longer than 1 second will seem intrusive on the continuity of thought.”

bq. “Assume an inquiry of any kind has been made. The user – and his attention – is captive to the terminal until he receives a response. If he is a busy man, captivity of more than 15 seconds, even for information essential to him, may be more than an annoyance and disruption. It can readily become a demoralizer – that is, a reducer of work pace and of motivation to work.”

Lawrence Lessig uses a fantastic analogy when he talks about the “change congress”:http://change-congress.org/ movement. He argues that the problem congress is facing is similar to that of an alcoholic. Before you can address any of the enormous number of social and economic issues the alcoholic faces, you first have to address the alcoholism. It’s not the only problem, but it is the first problem.

Similarly, slow response-time is usability’s alcoholism. A slow computer interface is demoralizing no matter how many flashy colors, pretty graphics and next generation workflows it has.

Everyone building today’s ajax-filled Web 2.0 applications should know “these axioms”:http://www.useit.com/papers/responsetime.html .

Next time your manager asks you for another feature that will slow down your UI (or refuses to give you time to rectify your slow UI), it is your duty to point them back at these truths.

Simpler debugging of Vista Media Center plugins


One thing that always bothered me is the workflow when debugging media center plugins.

I’m used to pressing F5 and having my debugger automatically attach to my process. This is not the way it works for media center plugins.

To launch a media center plugin you have to execute ehshell.exe, but to debug these plugins you need to attach to ehexthost.exe, a different process.

So, for example, our debug option for media browser says:

  • Start external program: C:\Windows\ehome\ehshell.exe

  • Command line arguments: /entrypoint:{CE32C570-4BEC-4aeb-AD1D-CF47B91DE0B2}{FC9ABCCC-36CB-47ac-8BAB-03E8EF5F6F22}

This configuration forces media center to launch our media browser plugin on start-up.

But… it only attaches to ehshell.exe, forcing on us a second, very annoying step which involves hunting down ehexthost.exe in the process list and attaching to it. It’s such a waste of time.

So … I wrote a little visual studio macro to take care of this shoddy work flow:

 
Public Sub CompileRunAndAttachToEhExtHost()

    ' build the solution, then launch the debug target (ehshell.exe)
    DTE.Solution.SolutionBuild.Build(True)
    DTE.Solution.SolutionBuild.Debug()

    ' attach to ehexthost.exe from a background thread, since it only
    ' appears once media center has spun up our plugin's entry point
    Dim trd As System.Threading.Thread = _
        New System.Threading.Thread(AddressOf AttachToEhExtHost)
    trd.Start()

End Sub

Public Sub AttachToEhExtHost()
    ' poll the process list for up to 5 seconds (50 x 100ms),
    ' waiting for ehshell.exe to spawn ehexthost.exe
    Dim i As Integer = 0
    Do Until i = 50
        i = i + 1
        Try

            For Each proc In DTE.Debugger.LocalProcesses
                If (proc.Name.IndexOf("ehexthost.exe") <> -1) Then
                    proc.Attach()
                    Exit Sub
                End If
            Next
        Catch e As Exception
            ' don't care - the debugger may be busy; retry on the next pass
        End Try
        Threading.Thread.Sleep(100)
    Loop
End Sub

To install it:

  • Tools->Macros->Macros IDE…

  • Expand MyMacros

  • Paste the code into your macro file

Next, bind your macro to a shortcut key

  • Tools->Options->Keyboard

  • Click on “Press shortcut keys:”

  • Press the shortcut you want (I use CTRL-SHIFT-ALT-A)

  • Start a search in the “show commands containing… ” box. Search for “CompileRunAndAttachToEhExtHost”

  • Click Assign

You’re done …

Now, to Compile->Launch and Debug your media center plugin, all you need to do is press CTRL-SHIFT-ALT-A

Refactoring media browser - entity resolution


Media Browser has a bunch of entities that it deals with, for example: Movies, Episodes and Shows. We scan the file system and figure out which files and folders map to which entities.

This is done using entity resolution.

I have defined a new set of classes that take in a location and spit out a factory that knows how to create an entity. Sounds a bit tricky, but this trickiness gives us tons of flexibility. The beauty of this system is that code now lives in a very logical place. If a user is trying to figure out why a particular file is not being detected as a Movie we know to look at the MovieResolver which contains all the logic for movie resolution. Additionally, this architecture is very plug-in friendly and incredibly testable.

For example, I just fixed up a bug where folders were not being detected as movies properly.

Writing the fix was really easy, first I started with this test case:

[Test]
public void TestRecursiveMovieResolution()
{
    var resolver = new ChainedEntityResolver() { 
        new MovieResolver(2, true), // maximum 2 videos per movie, allow for recursive search
        new FolderResolver()
    };
    var movieFolder = MockFolderMediaLocation.CreateMockLocation(@"
|Rushmore
 |part 1
  a.avi
 |part 2
  b.avi
");

    Assert.AreEqual(resolver.ResolveType(movieFolder), typeof(Movie));

}

Some observations:

  • I wrote an awesome little mock filesystem that allows us to test our algorithms without creating files.
  • We do chained resolution, meaning that if a resolver at the top of the chain resolves the type of a location we stop and return.

Next comes the fix in MovieResolver.cs:


  // count child videos recursively, but stop as soon as we have one
  // more than the per-movie maximum: no need to scan any further
  videoCount += childFolders
     .Select(child => ChildVideos(child))
     .SelectMany(x => x)
     .Take((maxVideosPerMovie - videoCount) + 1)
     .Count();

and


private IEnumerable<IMediaLocation> ChildVideos(IFolderMediaLocation location) {
    foreach (var child in location.Children) {
        if (child.IsVideo()) { 
            yield return child;
        }
        var folder = child as IFolderMediaLocation;
        if (folder != null) {
            foreach (var grandChild in ChildVideos(folder)) {
                yield return grandChild;  
            } 
        }
    }
    
}

Observations:

  • SelectMany is a really useful LINQ method that allows you to lazily flatten and chain your enumerables.

Media Browser source code license changes


Ever since Media Browser was Video Browser its source was licensed under the “GPL v3”:http://gplv3.fsf.org/ license. This meant that no one could use any of the code we wrote in our application without publishing their source code.

This, in my humble opinion, is good.

It means no one can take Media Browser, rename it, package it up and sell it. It also means that we have the right to look at the source code of any projects that are based off a branch of Media Browser. So, for example, “Music Browser”:http://code.google.com/p/music-browser/ has to remain open source under the GPL.

However, the GPL has its problems, the biggest being that it restricts code reuse. Companies usually shy away from looking at any code that is GPL, because of its viral nature. They do not want to end up having to release all their source code just because they used a little GPL class.

I have spent the last few weeks refactoring Media Browser. I defined a bunch of re-usable sub-systems: a logger, a mini persistence framework, an efficient filesystem scanner which is much faster than the .Net one, and various helper classes. These components are fairly portable and I would like to be able to use this code in future contracts and projects. I wrote most of this code so of course I can license it however I want. But what happens when people submit fixes? If these classes are GPL the fixes are GPL, so I can not re-license the fixes. And then I’m stuck with a ton of code I can not re-use.

So, Media Browser licensing has now changed somewhat. All the unit tests are licensed under the dual MIT and GPL license. All the files below MediaBrowser/Library are licensed under the dual MIT and GPL license.

This means you can grab my “interceptor”:http://code.google.com/p/videobrowser/source/browse/trunk/MediaBrowser/Library/Extensions/Interceptor.cs or my “LINQ extensions”:http://code.google.com/p/videobrowser/source/browse/trunk/MediaBrowser/Library/Extensions/DistinctExtensions.cs and use them in commercial projects. All you have to do is follow the “MIT license guidelines”:http://www.opensource.org/licenses/mit-license.php .

Further down the line I intend to split off the MediaBrowser/Library directory into its own DLL and perhaps split the project up into two pieces.

At a high level, our UI remains GPL (as do all the media center specific hacks) but the engine that drives the UI is now licensed under the much more permissive MIT license.

Upcoming features for Media Browser


I have spent a fair bit of time refactoring the Media Browser code base, the idea has been to make the product a lot more extensible and maintainable. Lots of tests were added and lots of stability and performance fixes were applied.

The new code base is VERY different, so different I have been contemplating not calling the next version 2.0.12 and instead going with version 2.1 or 3.0.

The list of fixes is so long it probably will not fit in a single blog post. I will leave the list of fixes to the release notes. But I would like to note that overall, MB is now faster, more stable and easier to debug (both in production and dev)

But this post is not really about that, instead I would like to give some people a taste of the features to come:

h2. New plugin architecture:

!http://farm4.static.flickr.com/3579/3512323742_1e3cc39d08.jpg?v=0!

We now have a way for people to extend Media Browser without joining the dev team. The extensibility allows you to extend our object model, add new entities, define new types of media and add items to the root menu.

h2. DVR-MS support

!http://farm4.static.flickr.com/3355/3512323074_f380716528.jpg?v=0!

I have two media centers, I tape stuff downstairs and watch stuff upstairs. To date DVR-MS support in MB has been a little woeful. No pretty metadata and horrendous filenames. The brand spanking new DVR-MS plugin solves this issue.

h2. Multiple backdrops

!http://farm4.static.flickr.com/3310/3512323198_bf1682d4a1.jpg?v=0!

!http://farm4.static.flickr.com/3391/3511514279_ebffe78f8c.jpg?v=0!

This one has a real high WAF; we have a nice transition effect that cycles between multiple backdrops. The backdrops can either be local or we can fetch them from themoviedb.

h2. Podcasts

!http://farm4.static.flickr.com/3585/3512322976_ef47913c49.jpg?v=0!

I love this feature. The configurator now allows you to add whatever podcasts you want to MB and we will go ahead and stream them for you.

h2. Play All and Random

!http://farm4.static.flickr.com/3539/3511513771_75c963994d.jpg?v=0!

Select a folder, hit play, and a little window will pop up that will allow you to either play all the movies in the folder (recursively) or play them randomly. Really slick.

h2. Whats new in this folder?

!http://farm4.static.flickr.com/3349/3512323672_a4ff5fb213.jpg?v=0!

Usually when you sit down to use your HTPC you want to look through the newest videos in your collection. Well, in detail view we will display a list of the newest children on the right hand side. But wait, there’s more: this list is clickable.

h2. Global indexing

!http://farm4.static.flickr.com/3388/3512323630_99537dd4e7.jpg?v=0!

If you click on an actor, or index by actor, the index will look through the whole collection, not only at the items in the folder directly below. This allows you to see what TV shows your favorite actors are in.

h2. iTunes HD Trailers

!http://farm4.static.flickr.com/3350/3511513675_b7ac68ebea.jpg?v=0!

I wrote this puppy today. Pretty iTunes HD trailers with full metadata. Take that, Front Row. iTunes trailers is a plug-in; getting the videos to stream requires a bit of Codec-Foo, so we will have to have a wiki page on this.

h2. Share your settings

You can now place your actor images on a network share, you can place your watched/unwatched status on a network share and you can place your display preferences on a network share. You don’t need to install a fancy shmancy database. It just works. Getting it to work was no easy task.

h2. There is more

There are some additional goodies I have not told you about. So hold tight, the next MB is shaping out to be a fantastic release. I’m using it at home on all my media centers.

h2. When will it be released?

When it’s ready. There are still some bugs to crush and features to polish. Hold tight, I hope it won’t take too long. Keep in mind, I am going away in a week (for 3 weeks), so if we are not ready by the end of next week you are going to have to wait a month.

Media Browser repository is now on Github


I just created a “clone of the Media Browser repository”:http://github.com/sambo99/Media-Browser/tree/master on Github.

h2. Why does this matter?

It’s not only that git is what all the cool kids are using. As a developer, working with git is a much more pleasurable experience; with subversion there is a lot of waiting around each time you commit. Subversion encourages big fat check-ins, cause you tend to only check in stuff once a day and you can not stage commits. Branching is cheap, but merging, even with the latest subversion, is still lagging behind git.

Anyway, you can read all about “why git is better than x”:http://whygitisbetterthanx.com/.

It’s a myth that there are no tools for Windows. “MsysGit”:http://code.google.com/p/msysgit/ gives you the two most important gui tools: “gitk” and “git gui”.

Now that our main repository is on github, ANYONE can fork Media Browser and add whatever features they want. Github makes it really easy to do so. Just create an account on github. We can then cherry pick which features/bugs we want to pull back into trunk.

So what are you waiting for “read up about git”:http://stackoverflow.com/questions/315911/git-for-beginners-the-definitive-practical-guide , it will change your whole approach to source control.

Will program for food, the future of Media Browser


!http://www.mediabrowser.tv/plugins/hungry_coder.jpg!

Media Browser has cost me a lot of time. According to “ohloh”:http://www.ohloh.net/p/videobrowser, Media Browser’s cost to date has been close to half a million dollars.

In the last few months Jas and I spent quite a lot of time working on Media Browser. My main selfish goal has been to make the source code resume-worthy. Media Browser is now on my resume. My secondary selfish goal was to add a few features/architecture I really wanted. I also achieved that goal.

Yesterday I posted a question to the Joel-on-Software “The Business of Software” forums asking them if there is any way I can make a business out of MB. I got “tons of very thoughtful”:http://discuss.joelonsoftware.com/default.asp?biz.5.761750.11 replies.

I would like to divide the replies into a few camps:

h3. The “you are screwed” camp

They say:

  • Transitioning open source software to a viable business model is REALLY hard, perhaps impossible.

h3. The “software should not be free” camp

They say:

  • You are producing something useful, charge for it! Stop giving it away for free.

h3. The “create a pro version” camp

They say:

  • Ship one “free” open source product and a second paid “professional” edition with extra features. “Wine and Crossover”:http://www.codeweavers.com/products/differences/ do this.

h3. The “charge for other stuff” camp

They say:

  • Charge for support.
  • Charge for customisation.

h2. Where do I stand?

I agree, I’m screwed. I like free-as-in-speech software: I like the hackability, the community, the source contributions and the freedom. I wrote a lot of source code for lots of companies, and the vast majority of the software I wrote I can no longer use. This really sucks, cause I spent lots of time writing reusable frameworks which I can no longer reuse or improve. I am not an “open source zealot”:http://en.wikipedia.org/wiki/Richard_Stallman but I have a soft spot for open-source.

I am not going to close the source. Even if I could get this past the other devs, it would be stabbing the community in the back, something that I will not stand for. We get great value out of having our source open: we get better testing, users can figure out stuff by themselves, and we get patch contributions. Besides, the source is open today; it can not be unopened. Someone could take the code today, fork it and start their own MB clone, something that is likely to happen if we closed the source.

I am not going to create a pro version. By creating a pro edition, we would become our own biggest competitor. I am also struggling to think of anything compelling a pro version could offer beyond what there is today. Having a pro edition would also encourage forking.

I would like to start charging for support, customisation and ads. I would like to add a donate button somewhere. But this leaves a big open question:

h2. Who gets the benjamins?

Traditionally, in open source projects, the money goes to the project’s running costs. Fortunately our running costs have been really low. Google hosts our downloads and a friend donated the slice MB’s website is running on.

We are going to have to have some internal discussions about this. But the way I see it, only the BIG contributors are entitled to a slice of the cake.

h2. Where does it leave me?

I will not work full-time for free. Forget about it. Once we are done stabilizing the current version of MB I am going to take some time off MB and look into other opportunities that have some chance of making money. If we can figure out some way in which MB donations, support fees, customisation and ads pay for my salary I will be happy to continue working on MB, even full-time.

What do YOU think I should do? Where do you see the future of Media Browser?

EDIT: I have made up my mind on what to do … “read about it here”:http://www.samsaffron.com/archive/2009/06/29/My+new+startup+Media+Browser


Our new startup, Media Browser


Free Beer Here

I have spent a lot of time thinking about Media Browser. I would like to thank the community for all of your help and great advice. I would like to thank the developers for all their patience and trust.

I am very proud and happy to announce that I am forming a new company that will make sure Media Browser development continues.

We will also be charging money for the previously free-as-in-beer Media Browser. Which brings me to the Q&A section …

h3. So, you may ask, why should you be happy about this?

  • The company will be able to better support the community and continue to evolve the product.
  • I will be able to continue working on Media Browser.
  • We will be able to compensate the contributors.
  • We will be able to buy equipment and perform more extensive testing.
  • We will be able to form alliances with other companies and perhaps, who knows, give you a legal way to get things like imdb metadata.

h3. How much is Media Browser going to cost?

We are still working out all the details. My current preference is a yearly fee of sorts, something very affordable. I would prefer not to go down the path of a one time fee for the life of the product; I don’t want to enter a support nightmare where we are stuck supporting legacy products, and I don’t want people to purchase a license only to discover they need to buy another one in 3 months in order to get the latest and greatest.

A yearly fee is simple and sustainable.

h3. What are you paying for?

The short answer is: you are paying money so Media Browser does not die. Support will be better but not perfect; at the start my work load will be pretty high, so I would actually expect support to get a bit worse while I make it sustainable. This means I will need to focus on a usable bug tracking system, cleaning up documentation and lots of administrative stuff.

h3. Does this mean we will be focusing on implementing lots of new features?

Stability, ease of use and a very low bug count are our top priority. There will be very few new features in the upcoming months; instead we will focus on polishing up our current release and ensuring that all our current features work as expected.

h3. What about the source code, does it remain open source?

I am hoping to keep the majority of Media Browser open source. Media Browser’s hackability has been a great source of strength. I really appreciate all the hard work that was so graciously contributed by the community.

h3. What about a trial version?

We are still working out all the details on that, this is something I would really like some feedback on, do we need a trial version?

h3. Why not go with donations?

There are lots of reasons, the main one being that I think Media Browser has a value which is more than $0. A lot of time was invested in it, something our users fully appreciate. Donations are not fair: they mean the general population receives free software because a bunch of philanthropists decide to donate. Also, people donate a lot less than you would expect. If we went down the donation route and later decided to change to a paid model, we would have a logistic nightmare.

h3. What about advertising?

There will be no ads built in to Media Browser to help fund it; I have always championed usability and this would be a big loss on that front. I will consider ads on the mediabrowser.tv site, which is great for supplementary income, but there is no chance it will cover our costs.

h3. How can you help?

We are looking to hire a designer for a short stint to help us revamp the mediabrowser.tv website; if you know someone, contact me.

Be patient, it is going to be a little slow for the next few weeks while this stuff is being set up. Once all the administrative tasks are completed, I will be focused on shipping a stable release of Media Browser.

The bottom line is that selling Media Browser is the only feasible way for it to continue evolving. It’s either this, or I quit working on Media Browser and find another project.

Behaviour driven design using rspec, IronRuby and C#

$
0
0

h3. Get a copy of IronRuby

You can download the latest IronRuby release from “http://www.ironruby.net/Download”:http://www.ironruby.net/Download

h3. Get yourself a mini IronRuby command line environment set up

  • Decompress the Zip file somewhere
  • Create batch files to start a command prompt with all the IronRuby paths. Place them in the bin directory (where iirb.exe lives):

bin/ruby-prompt.bat

%comspec% /k %cd%\settings.bat

bin/settings.bat

set PATH=%CD%;%cd%\..\lib\IronRuby\gems\1.8\bin;%PATH%

Great! Now when you click on ruby-prompt.bat you will get a command prompt that can exec iirb, igem and the rest of the IronRuby commands.

h3. Get rspec

(in your IronRuby command prompt run)

igem install rspec

h3. Write your tests

In this example I am testing an “LRU Cache”:http://en.wikipedia.org/wiki/Cache_algorithms#Least_Recently_Used

require File.dirname(__FILE__) + '\..\LRUCache\bin\Debug\LRUCache.dll'
include System::Collections::Generic
include System


# mostly by the sadly missing _why 
class Object
  def metaclass; class << self; self; end; end
  def meta_eval &blk; metaclass.instance_eval &blk; end

  def meta_alias(new, old) 
    meta_eval { alias_method new, old}
  end
  
  def meta_def(name, &blk)
    meta_eval { define_method name, &blk }
  end

end

describe "LRUCache" do 
  
  def create_cache(capacity)
    cache = LRUCache::LRUCache.of(System::String,System::String).new(capacity)
    [
      [:try_get, :TryGetValue], 
      [:contains_key?, :ContainsKey], 
      [:add, :Add],
      [:remove, :Remove],
      [:count, :Count]
    ].each { |new,old| cache.meta_alias(new, old) }
    
    cache.meta_def(:replay) do |array|
      array.each do |key,val| 
        if val.nil?; cache[key]; else; cache[key] = val; end  
      end
      cache
    end
    
    cache
  end

  it "should never exceed its capacity" do 
    cache = create_cache(10) 
    (11).times { |i|
      cache[i.to_s] = "data" 
    }
    cache.count.should == 10
  end

  it "should throw an exception if an item is accessed via the index "\
    "and its not there" do 
    cache = create_cache(10)
    lambda { cache["bla"] }.should raise_error(KeyNotFoundException)
  end

  it "should expire stuff that was not recently used, when capacity is reached" do 
    cache = create_cache(3)
    cache.replay(
      [ 
        ["a","aa"],
        ["b","bb"],
        ["c","cc"],
        "a",
        "b",
        ["d","dd"]
      ]
    )
    
    cache.contains_key?("c").should == false
    ["a","b","d"].each{|key| cache.contains_key?(key).should be_true }
  end

  it "should increase the count when stuff is added" do 
    cache = create_cache(3)
    lambda { cache.add("a","aa") }.should change(cache, :count).by(1) 
  end

  it "should decrease the count when stuff is removed" do 
    cache = create_cache(3) 
    cache.add("a", "aa") 
    lambda { cache.remove("a") }.should change(cache, :count).by(-1) 
  end

  it "should throw if a cache is initialized with 0 capacity" do 
    lambda { create_cache(0) }.should raise_error(ArgumentException) 
  end

  it "should allow us to enumerate through the items" do 
    input = [["a","aa"],["b","bb"], ["c","cc"]]
    
    cache = create_cache(3).replay(input)
    data = [] 
    cache.each do |pair|
      data << [pair.Key, pair.Value]
    end
    data.should == input 
  end

  describe "(try get)" do 
    before :each do 
      @cache = create_cache(3) 
    end

    it "should support missing items" do 
      found, value = @cache.try_get("a")
      found.should == false
    end
    
    it "should support existing items" do 
      @cache["a"] = "aa"
      found, value = @cache.try_get("a")
      found.should == true
      value.should == "aa" 
    end
  end
end

h3. Write the C# classes to power the LRUCache

IndexedLinkedList.cs

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;

namespace LRUCache {
    public class IndexedLinkedList<T> {

        LinkedList<T> data = new LinkedList<T>();
        Dictionary<T, LinkedListNode<T>> index = new Dictionary<T, LinkedListNode<T>>();

        public void Add(T value) {
            index[value] = data.AddLast(value);
        }

        public void RemoveFirst() {
            index.Remove(data.First.Value);
            data.RemoveFirst();
        }

        public void Remove(T value) {
            LinkedListNode<T> node;
            if (index.TryGetValue(value, out node)) {
                data.Remove(node);
                index.Remove(value);
            }
        }

        public int Count {
            get {
                return data.Count;
            }
        }

        public void Clear() {
            data.Clear();
            index.Clear();
        }

        public T First {
            get {
                return data.First.Value;
            }
        }
    }
}

LRUCache.cs


using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;

namespace LRUCache {
    public class LRUCache<TKey, TValue> : IDictionary<TKey, TValue> {

        Dictionary<TKey, TValue> data;
        IndexedLinkedList<TKey> lruList = new IndexedLinkedList<TKey>();
        ICollection<KeyValuePair<TKey, TValue>> dataAsCollection;
        int capacity;

        public LRUCache(int capacity) {

            if (capacity <= 0) {
                throw new ArgumentException("capacity should always be bigger than 0");
            }

            data = new Dictionary<TKey, TValue>(capacity);
            dataAsCollection = data;
            this.capacity = capacity;
        }

        public void Add(TKey key, TValue value) {
            if (!ContainsKey(key)) {
                this[key] = value;
            } else {
                throw new ArgumentException("An attempt was made to insert a duplicate key in the cache.");
            }
        }

        public bool ContainsKey(TKey key) {
            return data.ContainsKey(key);
        }

        public ICollection<TKey> Keys {
            get {
                return data.Keys;
            }
        }

        public bool Remove(TKey key) {
            bool existed = data.Remove(key);
            lruList.Remove(key);
            return existed;
        }

        public bool TryGetValue(TKey key, out TValue value) {
            return data.TryGetValue(key, out value);
        }

        public ICollection<TValue> Values {
            get { return data.Values; }
        }

        public TValue this[TKey key] {
            get {
                var value = data[key];
                lruList.Remove(key);
                lruList.Add(key);
                return value;
            }
            set {
                data[key] = value;
                lruList.Remove(key);
                lruList.Add(key);

                if (data.Count > capacity) {
                    Remove(lruList.First);
                    lruList.RemoveFirst();
                }
            }
        }

        public void Add(KeyValuePair item) {
            Add(item.Key, item.Value);
        }

        public void Clear() {
            data.Clear();
            lruList.Clear();
        }

        public bool Contains(KeyValuePair item) {
            return dataAsCollection.Contains(item);
        }

        public void CopyTo(KeyValuePair[] array, int arrayIndex) {
            dataAsCollection.CopyTo(array, arrayIndex);
        }

        public int Count {
            get { return data.Count; }
        }

        public bool IsReadOnly {
            get { return false; }
        }

        public bool Remove(KeyValuePair item) {

            bool removed = dataAsCollection.Remove(item);
            if (removed) {
                lruList.Remove(item.Key);
            }
            return removed;
        }


        public IEnumerator> GetEnumerator() {
            return dataAsCollection.GetEnumerator();
        }


        System.Collections.IEnumerator System.Collections.IEnumerable.GetEnumerator() {
            return ((System.Collections.IEnumerable)data).GetEnumerator();
        }

    }
}
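
To make the eviction rule concrete before running the spec, here is the same behavior driven from plain C# (again, my own snippet): reading a key promotes it to the back of the LRU list, and the head of the list is what gets dropped when the cache goes over capacity.

using System;

namespace LRUCache {
    class LRUCacheDemo {
        static void Main() {
            var cache = new LRUCache<string, string>(2);
            cache["a"] = "aa";
            cache["b"] = "bb";

            Console.WriteLine(cache["a"]);             // aa (reading "a" promotes it)
            cache["c"] = "cc";                         // over capacity: "b" is evicted

            Console.WriteLine(cache.ContainsKey("a")); // True
            Console.WriteLine(cache.ContainsKey("b")); // False
        }
    }
}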

h3. Run your tests:

C:\Users\sam\Desktop\Source\LRUCache\spec>spec lru_spec.rb
.........

Finished in 0.3340191 seconds

9 examples, 0 failures

Yay, we have a working rspec test suite.

I know there is a lot to chew on here; in future posts I will try to explain some of the trickery that is going on.

Brand spanking new Media Browser Blog


We just started a new “group blog”:http://community.mediabrowser.tv/topics?category_id=4&order_by=published for Media Browser.

My intention is to post most Media Browser related posts on the blog. To subscribe, use the following “rss feed”:http://community.mediabrowser.tv/topics.rss?category_id=4.

I still need to make a few more changes to the blog to help people reach it. I am going to set up a nicer URL and an easier-to-find RSS link. Also, I am undecided on whether to allow anonymous comments.

Anyway, plenty to read on the new blog. So, what are you waiting for, go “check it out”:http://community.mediabrowser.tv/topics?category_id=4&order_by=published.

Got Flare?


h3. Community Tracker

I spent a lot of time in the last couple of months working on a new project I like to call “community tracker”:http://community.mediabrowser.tv. When running projects you need some way of tracking defects and feature requests; without one, you drown pretty fast. There are probably thousands of bug trackers out there, all with their own strengths and weaknesses. Some are free and some are hellishly expensive. Nonetheless, every developer needs a bug tracker.

When I started Media Browser I used Google’s free bug tracker on Google Code. It functioned alright, but it was not that enjoyable to use.

Meanwhile, in another universe, I spent a fair bit of time on “stackoverflow”:http://stackoverflow.com/users/17174/sam-saffron. Too much time if you ask me.

So, I thought to myself, why not kill two birds with one stone: make it fun to track defects and migrate my addiction to a new site.

h3. Can tracking product defects be fun?

There is something “fun” about stackoverflow. You do stuff on the site; if your behavior is popular you are given “reputation”; that positive reinforcement makes the site addictive really fast.

I decided to take a classic bug tracker design, mix in a bit of stackoverflow and “uservoice”:http://uservoice.com/ and see what pops out at the other end.

h3. Why bother?

Does your bug tracking system come with flare?

!/images/posts/flare/flare.png!

Hmm, I mean this kind of flare:

!/images/posts/flare/sam_flare.png!

  • When I fix a bug on community tracker I get the “Bug Slayer” badge and a 15 point bonus.
  • When I complete a feature request I get the “Dream Maker” badge and a 15 point bonus.
  • If someone figures out how to reproduce a bug they get the “Detective” badge and a 15 point bonus.
  • If someone submits a bug that gets fixed they get the “Reporter” badge and a 15 point bonus.
  • And so on …

Through tagging, achievements, reputation and a healthy community, tracking bugs and features has become fun.

The results are very promising; in the last couple of months we had:

  • 42 fixed defects
  • 8 feature requests completed
  • 24 questions resolved

It’s early days for my community tracker project. Nonetheless, I believe it is taking Media Browser to a much more mature level. We are communicating better with our users.

I hope some day soon to have more projects using the community tracker engine.

Do you find the “Media Browser community tracker”:http://community.mediabrowser.tv enjoyable?

Diagnosing runaway CPU in a .Net production application


So, you have this .Net app in production. Somewhere, someone made some sort of mistake, and it appears the CPU is pegged for long stretches of time.

…and you ask yourself, how can I debug this:

  • There is no copy of Visual Studio installed.
  • There is a strict no-installer policy on these machines.
  • Performance is already messed up and you do not want to make stuff worse by diagnosing it.

To date, the only answer I am aware of is the magic voodoo art of WinDbg and the SOS extensions.

Sure, you can run “Process Explorer”:http://technet.microsoft.com/en-us/sysinternals/bb896653.aspx and isolate the evil thread:

(Screenshot: Process Explorer’s Threads view, with the CPU-hogging thread highlighted.)

But … you have no idea how that thread relates to your .Net application or where that evil thread is running.

Enter my cpu-analyzer tool.

Here is a quick demo:

using System;
using System.Threading;

namespace EvilApp {
    class Program {

        static void MakeBadStuffHappen() {
            ThreadPool.QueueUserWorkItem(_ => { MisterEvil(); });
        }

        static void MisterEvil() {
            // spin forever on a thread pool thread, burning CPU
            double d = double.MaxValue;
            while (true) {
                d = Math.Sqrt(d);
                if (d < 1.1) {
                    d = double.MaxValue;
                }
            }
        }

        static void Main(string[] args) {
            MakeBadStuffHappen();
            Console.WriteLine("Hello world!");
            Console.ReadKey();
        }
    }
}


We run:

cpu-analyzer.exe evilapp
------------------------------------
ThreadId: 4948
Kernel: 0 User: 89856576
EvilApp.Program.MisterEvil
EvilApp.Program.<MakeBadStuffHappen>b__0
System.Threading.ExecutionContext.Run
System.Threading._ThreadPoolWaitCallback.PerformWaitCallbackInternal
System.Threading._ThreadPoolWaitCallback.PerformWaitCallback
... more lines omitted ... 

Aha, the method called MisterEvil is responsible for roughly 9 seconds in user mode (the Kernel and User figures are 100-nanosecond ticks, so 89,856,576 ticks ≈ 9 seconds).

Of course this trivial sample is kind of boring, but once you apply this tool to bigger and more complex applications it can be a life saver.
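
For the curious: the Kernel/User numbers themselves need no magic, because Windows tracks CPU time per thread and System.Diagnostics exposes it directly. Below is a minimal sketch of that half of the story (my own illustration, not cpu-analyzer’s actual source); the hard part the tool adds on top, which this sketch does not attempt, is mapping those OS threads back to managed stacks.

using System;
using System.Diagnostics;

namespace CpuTimes {
    class Program {
        static void Main(string[] args) {
            // args[0] is a process name, e.g. "evilapp"
            Process process = Process.GetProcessesByName(args[0])[0];

            foreach (ProcessThread thread in process.Threads) {
                // TimeSpan.Ticks are 100-nanosecond units, the same scale
                // as the Kernel/User figures in the output above
                Console.WriteLine("ThreadId: {0} Kernel: {1} User: {2}",
                    thread.Id,
                    thread.PrivilegedProcessorTime.Ticks,
                    thread.UserProcessorTime.Ticks);
            }
        }
    }
}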

…and did I mention, no installer is required.

You can download a demo that works in .Net 2.0 and have a play. Hope you find it helpful. Of course no warranties are provided, and it’s not my fault if it crashes your app.

Update: Here is a .Net 4.0 version.
