2005-03-19T23:24:44.260+11:00
I picked up my new beast on Wednesday; it has taken me a couple of days to get around to posting some photos. It is a wonderful machine, though I have yet to push it hard due to the new tyres and the fact that I still have to run the engine in.
2005-02-11T16:24:44.806+11:00
I was pondering the other day exactly which applications I use each day, and for how long. I thought "I could write a tray app to track my open windows..." but I managed to think better of it and downloaded a trial version of a product called SmartTime Work Tracker (http://tracker.aklabs.com) that gives me beautiful graphics on work split...
2005-01-19T16:58:29.930+11:00
After ditching Google Desktop in favour of MSN Desktop Search, I today realised the true power of these indexed search apps. I not only index all my hard disks, but all of our corporate network drives as well. Today, after trying to find some installation instructions for an obscure piece of software, I was told to look on an old network server that I do not index. I resorted to Windows Search to look for the file keywords on that drive.
2005-01-19T12:36:23.626+11:00
Tech blogs rock. My favourites are Scott Hanselman, Don Box and Dan Appleman, just to name three. Why do I like reading them so much? I liken it to the really smart person in your office coming up to you and telling you about something really cool that is useful to know. Then multiply that by a hundred people. But the best part is that with blogs, you can shut the door on all the office time wasters who want to impart 'wisdom' on politics, sport or business.
2005-01-18T10:03:02.180+11:00
I thought I'd post this in case it can help some poor soul out there who could potentially spend days debugging the same ASP.NET problem I had...
This problem is caused by the viewstate of the page (which is stored in the page itself) getting too big. Most HTTP proxy servers have a limit on how much data they will pass back to the server in one go. Due to the ever-increasing amount of data in the database, the datagrid gets bigger. Because the datagrid is stored (along with all the other viewstate-enabled controls on the page) in viewstate, this hidden field has also grown. If you sniff the HTTP POST stream, you see that the output is truncated partway through the viewstate. The server receives a truncated POST stream and, in this case, sends back an unhelpful message.
Delete all your data to reduce the size of the datagrid.
However, if your data is important to you or to someone who pays you, the best trick is to store the viewstate on the server instead of in the page. The Session variable is the best bet, of course. Add these two methods to the page giving you trouble (not to any web user controls you may be using):
Protected Overrides Function LoadPageStateFromPersistenceMedium() As Object
    Return Session("_ViewState")
End Function

Protected Overrides Sub SavePageStateToPersistenceMedium(ByVal viewState As Object)
    Session("_ViewState") = viewState
End Sub
Remember that this is a workaround that will allow the page to function. Unless you are storing session state in SQL Server, you shouldn't use this method on all of your pages just to make them smaller. The Session variable is stored in server memory, so this will degrade performance by filling up RAM, and could result in increased disk activity when the server has to page memory out to store the session state. So use it carefully, especially on high-volume pages...
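If you do want to lean on this trick more broadly, moving session state out of process is the usual companion step. A minimal sketch of the web.config change for ASP.NET 1.x follows; the server name and credentials are placeholders, not values from my setup, and the session state database must already be installed (via the InstallSqlState.sql script that ships with the framework):

```xml
<configuration>
  <system.web>
    <!-- Store session state in SQL Server instead of in-process memory,
         so stashed viewstate doesn't eat web server RAM. -->
    <sessionState mode="SQLServer"
                  sqlConnectionString="data source=YOURSERVER;user id=YOURUSER;password=YOURPASSWORD"
                  cookieless="false"
                  timeout="20" />
  </system.web>
</configuration>
```

One gotcha: anything you put in Session under SQLServer mode must be serializable, which the viewstate object produced by the page framework is.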
2004-12-01T10:17:29.206+11:00
I was considering a post on open source for a few days, and then Scoble posted a little snippet which fired me into action... I think the concept of open source is wonderful. I always see it relating back to encryption technologies - you need to publish the algorithm publicly for scrutiny. I am not a fan of 'security by obscurity', in which, if the method used to encrypt (the lock) is exposed, the lock can be picked. One of the main tenets behind open source that I love is that everyone gets to scrutinize the code for security, performance, stability and other factors. I call this open source testing.

This is, however, as far as I can go with the open source movement. Open source development is a concept where a whole swag of developers get together and maintain a code base. For Linux there are thousands of these contributors, and they can all change the code base. Given the relative market shares, if the same proportion of developers got on board an open source Windows, there would be millions. Scoble makes a good point - who is regression testing all these changes? Who makes sure that the thousands of products made for Windows still work? Having holus-bolus changes made to the kernel by every man and his dog is just dumb for these reasons. It is not as bad as it sounds - these changes are generally reviewed by a committee before being accepted for major builds (though that doesn't stop someone from modifying their own version - more compatibility issues). This kind of managed acceptance works well for bug fixes, but what about new development? With all those thousands of developers having their two cents' worth and trying to agree on something, you end up with a tech version of groupthink, which doesn't hold much hope of doing anything revolutionary to the way we run our systems. I like the idea that Microsoft is paying really, really smart people the likes of Don Box to come and create new ways of computing.
Companies like Microsoft and IBM are working together to create the next generation of operating systems. Thank god. Many Linux zealots have the view that their OS is perfect, like some kind of divine creation that cannot be evolved, only refined. Surely in this day and age we can all agree that there is a better way to do everything, and we need to strive to find it. Ask anyone at Microsoft if they think that the Windows kernel is as good as the Linux kernel, and they will laugh. Of course it isn't. But there's a reason - when my mum installs Windows XP over Windows 95, all her programs still work. That's the goal: to protect the people who use the stuff for productivity from drastic changes. That's why Windows still has a hard-coded fix for ARMYMEN.EXE to make sure it runs in the right graphics mode. That's also why MS scrapped WinFS from Longhorn and made sure Avalon and Indigo could run on Windows XP. Because they care about their customers and understand that not everyone is a computer geek. Closed source development is crucial in order for people to be productive in their computing. The only thing I could ask is that MS makes their codebase public (read-only) to increase security and stability, and relies on their ability to innovate instead of worrying about others stealing their secrets. [...]
2004-11-10T20:20:14.810+11:00
I've only been on the coast for a month now, still time enough for me to make a few comparisons (some dreadfully obvious - sorry):
These are just a few things I have noticed... as time goes by I will be surely assimilated into the collective and my perception of normal will conform.
2004-11-10T10:19:25.703+11:00
I have now settled into a life on the Gold Coast that sees me limited by local infrastructure to dialup speeds that were outdated 10 years ago. Let me whine about a few things to make myself feel better.

Pair gains

Not only do these things exclude you from ADSL access (although they can be removed), they have got to be one of the most devious trade practices that I have ever come across. Basically, each home has a copper cable entering it from the street. That cable has six smaller wires inside it. Once pair gain has been set up, a single phone line requires just two of those wires, so effectively you could get three separate lines from your one copper cable. Great idea; it serves the needs of many. But to charge a whole extra line rental for this is very sneaky. Remember you also pay a couple of hundred bucks to get a bloke out to set it up, so that should cover it, you would think... It is akin to the power company charging you double your service fee for each power point in your house. It costs Telstra next to nothing to maintain this line; it is just another entry in a database somewhere, and all the switching takes care of itself. I would have thought a reduced service charge would be applicable for each additional line on that cable... you still pay for the calls, so where is the actual cost to Telstra to maintain this 'extra line'? If there isn't one, why are we being charged? A word starting with profit and ending with eering comes to mind. Please enlighten me if I am not seeing the whole picture.

Remote Integrated Multiplexer systems (RIMs)

These unholy devices are a lesser example of Telstra's attempts to shaft the public, but a good demonstration of a total lack of foresight. Basically, a RIM is a big green cabinet you might see on the side of a street, usually in a new estate. All the copper cables from everyone's houses are mashed into this thing, and out the other end a few runs of fiber optic cable go to the local exchange.
This way, Telstra can add new lines into the estate without having to run copper cables too far - just from the RIM to the house, not from the exchange to the house. On the surface, it sounds like a good way to provide a service without investing too much in infrastructure. But these nasty devices usually prevent every house attached to them from getting ADSL. In fact, it (or pair gain, I forget) generally restricts even dialup speeds to 28.8 kbps. OK, so this sounds like the infrastructure is a bit behind the technology. So where's the profiteering? My main gripe is that the service charge is exactly the same for those poor dishevelled folk living off a RIM as it is for those who enjoy higher bandwidth and ADSL capabilities. Why should you be charged the same price if Telstra can't deliver the same services? To Telstra's credit, they have endeavoured to rectify this problem in a lot of NSW metro areas and major regional centres (press release). I can't accept the line that "delivering high speed internet services on normal phone lines was not contemplated" as an excuse - sure, they didn't see it coming - but their unwillingness to fix it has been appalling. So are their claims that they are "on track to provide access to ADSL broadband to over 90% of customers nationally by the end of 2006". Ninety percent is not an acceptable number. You still have 10% of the population who are disadvantaged in the internet economy - hundreds of thousands of people unable to get access to broadband. Some of those people need fast internet to do their job properly and competitively (like me), and Telstra has them at a disadvantage. There are also a whole lot of other whacky corner-cutting technologies employed to give people a basic phone service, limiting their broadband access, that I haven't covered here. My message to the telco behemoth is to open up the coffers filled by super-sized profits (a lot of it from t[...]
2004-10-29T21:12:48.026+10:00
Well, not quite. As I expected, the poor state of communications infrastructure on the Gold Coast means that I am still without broadband, two weeks on. I am desperately missing my 10Mb Optusnet cable, which I took for granted. I am hoping that if all goes well I will be back online again in all the glory of 1.5Mb - which beats hands down the 28.8k that I am struggling with at the moment. So it seems that blogging is too much of a chore until I get the fat pipe back... I apologise (to my massive readerbase) for not blogging for over a month now, but if all goes well you'll hear from me soon enough.
2004-09-20T20:01:10.266+10:00
The topic of the metadata-driven (MD) data warehouse - along with the Common Warehouse Metamodel (CWM) - and the subject of code generation and logic rule systems fascinates me... Having worked in corporate environments where the core business I was involved in was data validation and data warehouses, I can't believe more effort hasn't been directed to this area. The reason for my focus on this is that I am currently involved in delivering (in the 3 weeks I have left!) an MD code generation system which will use SQL code behind the scenes to build a relatively simple data warehouse with basic ETL steps...

Why code generation for the data warehouse? First, I'll start by stating that I believe there are three symptoms you can diagnose early to determine whether code generation should be used on a project:

- Repeated logic - the same (or very similar) thing happening to different stuff everywhere.
- Hard-coded logic rules - the description of "what" we do to our data isn't abstracted from the way the system actually works.
- Unconstrained conceptual entities - things that are conceptually the same in two places but represented either differently or separately, and don't share a relationship.

So an initial approach of specifying user requirements, designing the data warehouse and building it iteratively was not as appealing to me this time around. Having been involved in a few data warehouse projects, I have come to learn that the initial design is generally flawed and needs modification in order to be supported into the future. This approach is generally focused on the schema of the data warehouse: spec the needs, design the schemas, build the schemas, and tack on some ETL logic to make it work. Almost as a rule, it doesn't, but that's OK - that's what iterative development is all about. But in this case, the way the system works is already spliced into the schema of how we want the data presented to the user...
This means that if we need to fundamentally change the way we treat our dimensions, we have to change reams of code and try not to break anything... and plenty of other nasties. At the system level, a typical design goal is usually something like this: "Create a data warehouse that allows users to get access to data collected by the corporate financial data application..." The unfortunate temptation is to go off and spec a corporate financial data warehouse. But what is "corporate financial"? The conceptual answer (e.g. the column names, data items, business rules etc) usually surfaces: corporate financial is "Year", "Month", "Revenue" etc. So in many cases, the coding is also done at the conceptual level. The real answer to our question is that Corporate Financial is a data collection - one that can be easily described with metadata as a collection of inputs, outputs, definitions, relationships, procedures and rules. It is about at this point that the whole thing turns sour - by ignoring the basic facts about the metadata of what we want to present, we lock ourselves into a certain way of doing something, which can be a nightmare to develop, debug and deploy. The common result of this solution is that the project is never 'finished'. The end of the financial year rolls around, and the users want new items in the data collection... It's up to the developers and DBAs to hack the scripts to get them through. What kind of development team wants to be doing code maintenance ad infinitum? That detracts from time spent developing new stuff. The goal should be to pass the bulk of the logical and conceptual maintenance tasks back to the application owners. The MD approach generally takes longer than seat-of-your-pants coding, but delivers long-term gain. It should also be noted that MD doesn't fix all bugs; it just limits their occurrence to bad modelling decisions, rather than coding errors. This type o[...]
2004-09-19T20:51:12.326+10:00
This was most likely the last idle weekend I'll have before the moving activities really kick in. I spent it mostly doing reading and research on the web; that included catching up on some episodes of the .NET Show and MSDN TV that I have been neglecting lately... I also managed to wash the car and the bike. Watched two of the best games of footy I've seen all year, St Kilda v Port Adelaide and Brisbane v Geelong, on Friday and Saturday night respectively.
2004-09-15T19:26:19.963+10:00
I made my regular jaunt to the bookstore today; once a week or so I browse the bargain bins for any gems that are hidden away. I don't spend a heap of money on books; I will buy the odd new one for full price if it is a real gem, or if it is pertinent to a solution I am working on or researching. I tend to buy the discounted books that are still relevant...
Apparently the stock that was discounted didn't last... and word is there were a few scraps over the remaining few...
2004-09-13T10:46:24.046+10:00
Amanda and I took this weekend to go down to Blairgowrie, where we planned on having what will most likely be our last relaxing weekend before we move. We stayed in a house belonging to a family friend, which was great apart from trying to get the wood heater going at 9pm on Friday night (no central heating).
2004-09-08T17:20:29.950+10:00
It is official - Amanda and I are moving to Queensland...
2004-09-07T17:20:42.626+10:00
I am officially a huge fan of Charles Petzold.
2004-09-07T17:21:50.370+10:00
So I had a go at setting up .TEXT on my PC at home (running Windows XP SP2), hosted through my 1.5Mb Optusnet cable connection. The thing was, Optusnet have blocked port 80 due to all the worms that were (and probably still are!) floating around the Windows world. So I had to fire up my site on a different port (I used 8080), which is a pain to link to, as we all know.
2004-09-07T17:22:06.713+10:00
I made a new-financial-year's resolution to start blogging... Here it is, albeit late...