Exchange 2007 Beta – aftermath

Saturday, 23 September 2006

As I mentioned some days ago, I evaluated Microsoft Exchange 2007.

Boy, that was fun. I defined roles, copied mailboxes, flooded it with spam, cleaned it up with retention rules, and accessed network shares via web access.
In the end the test was spread across four servers, three of them virtual machines on the main testing host: two for mailbox storage, one for Client Access and one for Hub Transport.
To tie up some loose ends from my previous posting:

  • I wasn’t able to fine-tune the web access component; hopefully that makes it into the final release.
  • SMTP servers can be set up as desired (port-wise, etc.); I just didn’t look hard enough.
  • Message tracking is a joke, I was SO right about that one.

So, what is my impression? Well, it’s already decided: we will adopt Exchange 2007 as soon as it’s out and reported ready for full business use.

What? You think you noticed that my testing is coming to an end?
Well, you’re right: testing of Exchange 2007 will temporarily be shut down, but only to rise from the ashes in my new environment, where I will test a migration scenario modeled as closely to the real world as possible.

I will be using two Supermicro servers, each equipped with two dual-core 64-bit HT Xeons (that makes eight virtual CPUs per machine) and 12 GB of RAM, connected to two Promise M500i SAN enclosures stuffed with 500 GB SATA II drives.
All fully virtualized to provide a peak of 10 virtual servers with about 2 GB of RAM each, able to access roughly 5 terabytes of disk space.
I guess that will be enough to simulate a few Server 2003 R2 DCs and some Exchange 2003 & Exchange 2007 boxes 🙂


Evaluating Microsoft Exchange Server 2007

Wednesday, 6 September 2006

Yesterday I got my hands on the beta version of Microsoft Exchange Server 2007. What a ride.

I started by deploying the HUGE package (about 1.2 GB) in our testing environment, a dedicated LAN with a working Exchange 2003 SP2 infrastructure.
The hardware I used is a dual Xeon 2.8 GHz machine with 4 GB of RAM running Windows Server 2003 R2 x64 Edition.
The installation itself went without any glitches; I used the standard scheme, installing the following roles: Mailbox, Client Access and Hub Transport.

The installer created a new administrative group and a connector to the Exchange 2003 infrastructure.
That connector is the only thing the two platforms have in common; everything else has to be reimplemented, redesigned or generally adapted to.
A few examples are recipient policies, offline address books (if you want them at an HTTP distribution point), accepted mail domains and so on.

After this I moved the existing mailboxes from the Exchange 2003 server to the Exchange 2007 server, just to see how the new web access works.
That was when I really started to get impressed: it’s much quicker now than ever before, the calendar pops up very fast, and everything looks stable and clean.
I think they really have some clever pre-caching/Ajax/Atlas routines in here.
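For reference, the mailbox moves can also be driven from the new management shell; here is a minimal sketch of what I mean (the server, storage group and database names below are placeholders for illustration, not my actual setup):

```powershell
# Move a single test mailbox to a database on the Exchange 2007 server
Move-Mailbox -Identity "testuser@example.local" `
    -TargetDatabase "E2K7-MBX1\First Storage Group\Mailbox Database"

# Or pipe every mailbox off the old 2003 box in one go
Get-Mailbox -Server "E2K3-OLD" |
    Move-Mailbox -TargetDatabase "E2K7-MBX1\First Storage Group\Mailbox Database"
```

The piping trick is what makes the new shell attractive: the whole migration batch is one line instead of a click-through wizard per mailbox.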

Later, however, I tried to dig deeper into the configuration, and I was unable to find out how to do some basic operations, which makes me wonder whether the real “hands in the guts” configuration of the server is only possible via the new MSH.
So, just for your information, here are some things that are plain obvious in Exchange 2003 but seem unachievable to me in Exchange 2007 via the new admin console:

  • Setting up another, featureless (host-header enabled) website for web access to minimize the attack surface
  • Fine-tuning how the SMTP server component behaves, like putting it on another port for internal use only
    (Maybe this “issue” resolves itself by assigning the right roles (and ONLY them) to the server, like “Edge Transport”.)
  • The message tracking center is still a joke; I hope this gets changed in the final release (err, shouldn’t a beta be feature complete?)
    The only thing you get out of it are the lines in the logfile that match the given criteria, in a table that reminds me of the early days of VB 6.0.

Let’s see what can be done in the MSH.
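Two of the gripes above look scriptable from the shell; a sketch of what I intend to try (the connector name, server name and IP range are invented for the example, and I still have to verify the exact parameters):

```powershell
# Put an additional SMTP listener on a non-standard port, reachable
# only from an internal subnet ("HUB01" and the range are placeholders)
New-ReceiveConnector -Name "Internal 2525" -Server "HUB01" `
    -Bindings 10.0.0.5:2525 -RemoteIPRanges 10.0.0.0/24 -Usage Custom

# Query the tracking logs directly instead of the tracking center GUI
Get-MessageTrackingLog -Sender "testuser@example.local" `
    -Start "09/05/2006 08:00" -End "09/06/2006 18:00" -EventId RECEIVE
```

If the shell route works, the tracking center table I complained about becomes irrelevant: the cmdlet output can be filtered and sorted like any other object stream.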

I will continue to explore the capabilities of that new platform in the following days.

Interesting but Underrated

Monday, 27 March 2006

Microsoft Appeals Korea Fair Trade Commission Decision

First Mouse Without a Click, Silicone Padded…

First Martian HiRes Images

Solid State Disks can store up to 32 Gbytes

Second Life.. in no way an OS

Monday, 27 March 2006

My 0.02€ on what I read over here:

Being a "Platform inside a Platform" may be true, but then again every MMOG might become an OS. I, for one, am an avid player of EVE Online, as much as time allows.

We have a player-driven economy there, we have player-owned structures, we have player-run starbases, we have an in-game radio channel, we have out-of-game loot shops for in-game items/cash... the list goes on.
Everybody is completely free.
Given enough time invested, we can recreate every situation we want.

But then, there is a red line, maybe THE red line.

All in all, our online lives are entries in a database: our characters, our ships, our assets. Everything can be reduced to a row in a table, referenced by or referencing other rows in other tables.

Summing it all up:

We have an interface of medium complexity:
some mouse clicking, menus to a maximum depth of 3, a very intuitive style of choosing one's own actions.

With this interface we accomplish very complex tasks:
fulfilling sell/buy orders, killing pirates/other players for interstellar kredits (ISK), running our starbases, defending our territories, having a good time online altogether, having fun.

The output of such complex tasks, by contrast, is very simple:
It's our beloved lines in the database.

So, what good is a platform inside a platform as an OS when it just increases the complexity to get simple things done?

The Atlas Framework

Saturday, 25 March 2006

Words cannot describe how much this rocks.
Empowering people to write browser-independent code, including all the Ajax bells and whistles, with *VERY* few lines of code makes me very happy.

Check out the site, including a very cool demo video here:

DFS-R & Shadow Copies = DPM?

Friday, 24 March 2006

After reading this post I remembered that I tested the Data Protection Manager when it was still in beta. I never thought much more about it; I use DFS-R for my backup/data collection needs now.

How, you ask? Well, I guess I was thinking a bit outside the box when I designed this "backup" solution.

Let me start with the production server: I have roughly 800 GB of data that I have to back up daily and keep for at least six months. Five months in weekly snapshots, the most recent 30 days in 12-hour snapshots.

So I set up shadow copies on the production server, one at 11:30am and one at 04:00am; that gives me my 30 days of fine-grained "backup".
For obvious reasons I wanted the long-term backup to be available at all times and easily restorable.
This is where DFS-R comes in. I replicate over Gigabit Ethernet to my backup server, using 128 Mbit/s during work hours and the full pipe during off-peak hours.
The backup server creates a shadow copy once a week, giving me my 6+ month backup.

Things to consider:
The DFS-R staging folders on both servers are on their own physical RAID 10 arrays, as are the shadow copies.
The staging quota is set to 300 GB, shadow copy storage to 300 GB on the production server and 500 GB (more changes over time) on the backup server.
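For anyone wanting to rebuild the shadow copy side from the command line instead of the Shadow Copies tab of the volume properties, it looks roughly like this (drive letters are placeholders; D: is the data volume, E: the dedicated storage array):

```powershell
# Dedicate the separate array as shadow copy storage for the data volume,
# capped at the 300 GB quota mentioned above
vssadmin add shadowstorage /for=D: /on=E: /maxsize=300GB

# Take the two daily snapshots via scheduled tasks
schtasks /create /tn "Snapshot 1130" /tr "vssadmin create shadow /for=D:" /sc daily /st 11:30:00
schtasks /create /tn "Snapshot 0400" /tr "vssadmin create shadow /for=D:" /sc daily /st 04:00:00
```

The weekly snapshot on the backup server is the same idea with /sc weekly; the DFS-R bandwidth schedule itself I set through the DFS Management console.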

All in all I can say I really trust this solution. Yes, it's expensive but I know it's there when I need it 🙂

Time to see if the DPM can catch up with that level of convenience.

Nice one, Bill

Thursday, 23 March 2006

Watch Bill Gates as he gets very excited talking to Tim O'Reilly after his keynote speech at MIX06.