In Computer Science at St Andrews, an email from Professor Kevin Hammond in May 2011 on reviewing and the reviewing process sparked a very fruitful discussion that I'd like to summarise for myself and my students here. Kevin is presenting on this topic at the SICSA PhD conference in Edinburgh in May 2011. I'm going to continue to edit this page as I develop new insights or as ideas and pointers present themselves.
Given the nature of Computer Science, our conferences are a very important pathway for disseminating the results of our work. Other people, perhaps you, might disagree, but this is how I see the field and I know many others see it this way too. As a result, conferences are very competitive, with sub-20% acceptance rates being common. Written reviews from independent reviewers determine what is accepted for publication at a conference or in a journal, and "in the long run reviews have an impact upon other people's professional advancement and careers, and upon progress in the field."
As I understand it, this is different from other areas of science and the humanities, where conference acceptance rates can be very high and "reviewing" conference papers before acceptance is an alien concept. As a result, many fields don't regard papers in peer-reviewed conferences as high-quality scholarly articles. In computer science, by contrast, papers in peer-reviewed conferences are accepted as high-quality scholarly articles.
It's important here to draw a distinction between conference paper reviewing (which is often "one shot" and affords the authors little opportunity to refute, address, or improve the paper based on the reviews) and journal paper reviewing. The journal review process is just that, a process: a rejection with reviews can be used to address the comments (factual or otherwise) and improve the paper, which can then be resubmitted. Indeed, the question of "Choosing a venue: conference or journal?" has been considered by others and is important to consider often and early.
The central questions as I see them are "how do I write a good review?" and "how do I deal with a review I feel is unfair?". Many reviewers put in thankless hours of work writing detailed and helpful reviews. It's our job as academics to accept these reviews and use them to make our work better. We shouldn't simply ignore reviews (negative or positive). Many of us have seen the same paper resubmitted to another venue without any of the review comments being addressed. This is frustrating for reviewers and turns the review process, which should improve the work, into a game. I feel we collectively need to push against this (regardless of how unfair, unjust, incorrect, or inappropriate we feel the reviews received might be). An unfair review can be addressed in a rebuttal process (such as in CHI), in a new submission, or by making all reviews public. This is, of course, an area of active discussion. For example, as Mirco noted, the ACM have a document on Rights and Responsibilities in ACM Publishing which looks at the area from a range of viewpoints, from author to reviewer to conference chairs and journal editors.
Whatever the case, writing a good review is an important skill, and here are the thoughts and reflections of others on this topic.
- Reading a computer science research paper (PDF) Computer Science
- How to review a systems paper (PDF) Computer Science
- How NOT to review a paper (PDF) Computer Science
- The task of the referee (PDF) Computer Science
- The Task of the Referee (ACM paper) (PDF) Computer Science
- How to review a paper (full text html) Physiology (PDF download)
- How I review an original scientific Article (full text html) Respiratory
I'm going to reflect further and extend my own thoughts on this here in due course.
For now, take writing a review as a serious matter: give yourself enough time to read and reflect on the paper(s). I often read the papers quickly when I first get them, then again a few days or weeks later in more detail. This two-phase process is really helpful in keeping my comments and thoughts in scope for the conference or journal. It also helps me focus my comments on the research that was done rather than the research I wish had been done. These are not the same thing, and I think a lot of people forget this in their role as a reviewer. Saleem suggests that we should always "include references to related work" in reviews as appropriate, instead of simply alluding to "past work" or "this has been done". It's important to learn to appreciate the reviews we get (and acknowledge them as appropriate) and not simply dismiss them.
When receiving a review, read it. If your work is accepted, treat the reviews as a way to make the final version of your paper that much better, so people read, use, and cite it (getting it published is only the first step; making it have impact is another job). If rejected, read the reviews. If there is no rebuttal, wait a few days, then read the reviews again (with your co-authors) and discuss what you can take from them to improve the paper. Break each review down into aspects of style, substance, new work, errors, and so on, then decide how to use it to your benefit. One poor review out of three can be frustrating, but three out of three suggests you might be doing something fundamentally wrong in the presentation or experiments. Of course this isn't always the case, and there are famous examples of ground-breaking papers being rejected, cited as evidence that peer review doesn't work. We must acknowledge that peer review is a human activity and as such is inherently fallible.
In summary, if asked to write a review and you are too busy, say no; but if you say yes, take the time to do it well. When you get reviews, read them, use them, and learn from them what you find useful and what you wish to avoid in your own reviews.