Thursday, September 28, 2006

Automated Workflow Environments and EMR

Well, we now work in the next era of software development - not only designing applications, but also building systems that communicate with each other and thereby participate in a workflow.

Automating this workflow through the seamless integration of these apps is a task that challenges many of the industries that we work in.

Automated Workflow Environments are environments in which multiple systems contribute, collaborate and communicate, enabling a network of applications to solve complex problems efficiently with no human interaction. You could call them Digital Ecosystems.

You can construct workflow nets to describe the complex problems that these systems efficiently solve. Workflow nets, a subclass of Petri nets, are attractive models for analyzing complex business processes. Because of their solid theoretical foundation, Petri nets have been used successfully to model and analyze processes from many domains - for example, software and business processes. A Petri net is a directed graph with two kinds of nodes - places and transitions - where arcs connect a place to a transition or a transition to a place. Each place can contain zero, one or more tokens, and the state of a Petri net is determined by the distribution of tokens over its places. A transition is enabled if each of its input places contains a token. When the transition fires, i.e. executes, it takes one token from each input place and puts one token in each output place.
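To make the firing rule concrete, here is a minimal Python sketch (not from the original post; place and transition names are illustrative) of a Petri net whose transitions consume one token per input place and produce one per output place:

```python
# Minimal Petri net sketch: places hold token counts, transitions fire
# only when every input place has at least one token.

class PetriNet:
    def __init__(self, places, transitions):
        # places: dict place -> token count
        # transitions: dict name -> (input places, output places)
        self.marking = dict(places)
        self.transitions = transitions

    def enabled(self, name):
        inputs, _ = self.transitions[name]
        return all(self.marking[p] >= 1 for p in inputs)

    def fire(self, name):
        if not self.enabled(name):
            raise ValueError(f"{name} is not enabled")
        inputs, outputs = self.transitions[name]
        for p in inputs:           # consume one token per input place
            self.marking[p] -= 1
        for p in outputs:          # produce one token per output place
            self.marking[p] += 1

# A two-step sequence: start -> register -> examine -> done
net = PetriNet(
    {"start": 1, "registered": 0, "done": 0},
    {"register": (["start"], ["registered"]),
     "examine":  (["registered"], ["done"])},
)
net.fire("register")
net.fire("examine")
print(net.marking)  # {'start': 0, 'registered': 0, 'done': 1}
```

The marking (token distribution) after each fire is the net's state, which is what makes these models analyzable.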

In a hospital environment, for example, the patient journeys and processes involved show complex and dynamic behavior that is difficult to control. A workflow net that models such a process provides good insight into it and, thanks to its formal representation, offers techniques for improved control.

Workflows are case oriented; each activity executes on behalf of a case. In the hospital domain, a case corresponds to a patient symptom and an activity corresponds to a medical activity. The process definition of a workflow assumes that a partial order exists between activities, establishing which activities have to be executed in what order. In the Petri net formalism, workflow activities are modeled as transitions and the causal dependencies between them are modeled as places and arcs. Workflow routing uses four kinds of constructs - sequential, parallel, conditional and iterative routing - which together define the route taken by 'tokens' through the workflow.
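As a hedged illustration (all place names here are hypothetical), the parallel and conditional routing constructs can be wired up with the same token rule:

```python
# Sketch of routing constructs as place/transition wiring. marking maps
# a place to its token count; a transition fires by moving tokens from
# all of its input places to all of its output places.

def fire(marking, inputs, outputs):
    assert all(marking.get(p, 0) >= 1 for p in inputs), "not enabled"
    for p in inputs:
        marking[p] -= 1
    for p in outputs:
        marking[p] = marking.get(p, 0) + 1

marking = {"triage": 1}
# Parallel routing (AND-split): one transition puts a token on both
# branches, so labs and imaging can proceed concurrently.
fire(marking, ["triage"], ["labs_pending", "imaging_pending"])
fire(marking, ["labs_pending"], ["labs_done"])
fire(marking, ["imaging_pending"], ["imaging_done"])
# AND-join: the synchronizing transition needs a token on BOTH branches.
fire(marking, ["labs_done", "imaging_done"], ["review"])
# Conditional routing (XOR-split) is the mirror image: two transitions
# share one input place; whichever fires first consumes the one token.
fire(marking, ["review"], ["discharge"])   # the chosen branch
print(marking["discharge"])  # 1
```

Sequential routing is simply a chain of such transitions, and iterative routing is an arc that leads a token back to an earlier place.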

Well, enough theory, how does this apply?

Think of this in practical terms using the example of an EMR* or CPR* System or HIS* System:
• A patient arrives at a hospital/clinic for a consultation or particular set of exams or procedures.
• The patient is registered using key demographics, if new to the hospital. A visit or encounter record is created in the Patient Chart (EMR) - with vitals, allergies, current meds and insurance details.
• The physician examines the patient and orders labs, diagnostic exams or prescription medications for the patient, possibly using a handheld CPOE*.
• The patient is scheduled for the radiology exams using the RIS - radiology info system; samples and specimens are sent to the LIS - laboratory info system or HIS (hospital info system)
• The RIS or CIS or LIS or HIS sends notifications to the Radiology and/or Cardiology and/or Lab or other Departments in the hospital through HL7 messages for the various workflows.
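For a feel of what those HL7 notifications look like on the wire, here is a hedged sketch that splits a toy HL7 v2 order message into segments and fields; the sample values and application names are invented, and real feeds carry many more fields:

```python
# Hedged sketch: an HL7 v2 message is pipe-delimited text, one segment
# per line (carriage-return separated), fields separated by '|'.

SAMPLE = "\r".join([
    "MSH|^~\\&|EMR|HOSP|RIS|RAD|200609281200||ORM^O01|MSG0001|P|2.3",
    "PID|1||123456||DOE^JOHN",
    "OBR|1|ORD9876||71020^CHEST XRAY",
])

def parse_hl7(message):
    """Group raw segments by their 3-letter segment ID."""
    segments = {}
    for raw in message.split("\r"):
        fields = raw.split("|")
        segments.setdefault(fields[0], []).append(fields)
    return segments

msg = parse_hl7(SAMPLE)
print(msg["MSH"][0][8])   # ORM^O01 - the message type: a general order
print(msg["PID"][0][3])   # 123456  - the patient identifier (MRN)
```

An interface engine does essentially this parsing, then routes the message to the RIS, LIS or modality based on the message type and receiving application.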
• The various systems in these departments will then send HL7 or DICOM or proprietary messages to get the devices or modalities, updated with the patient data (observations, restrictions, prior history, etc.)
• The patient is then taken around by the nurses to the required modalities in the exam/lab areas to perform the required activities. There may be lab reflex orders created to complete the lab procedures.
• The patient finishes the hospital activities while the diagnosis and study continues in the background with the care teams; these entire datasets are coalesced and stored in rich structured reports or multimedia formats in the various repositories - resulting in a summary patient encounter/visit record in the Electronic Patient Record in the EMR database along with possible follow up future visits and prescriptions.
• There could also be other workflows triggered - pharmacy, billing [based on CPT codes], etc.
• The above is just the scenario for an OUTPATIENT or Walk-in; there are other workflows for INPATIENT - ED/ICU/other patients.

The key problems in this 'Automated Workflow Environment' are:

• Accurate Patient Identification and Portability to ensure that the Patient Identity in the EMR is unique across multiple systems/departments/clinics and maybe hospitals. The Patient Identity key is also essential to Integrating Patient healthcare across clinics, hospitals, regions (RHIO) and states.
• Support for Barcode/RFID on Patient Wrist Bands, Prescriptions/Medications, Billing (using MRN, Account Number [and Visit/Encounter Number] and Order Number), etc to enable automation and quick and secure processing.
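One common way to make scanned identifiers self-checking is a check digit; this is a sketch using the standard Luhn algorithm over a hypothetical numeric MRN (the post does not prescribe a particular scheme):

```python
# Hedged sketch: appending a Luhn check digit to a numeric MRN so a
# mis-scanned wristband or label is caught before it reaches the EMR.

def luhn_check_digit(number: str) -> str:
    total = 0
    for i, ch in enumerate(reversed(number)):
        d = int(ch)
        if i % 2 == 0:          # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return str((10 - total % 10) % 10)

def valid(labeled: str) -> bool:
    # Recompute the check digit over the payload and compare.
    return luhn_check_digit(labeled[:-1]) == labeled[-1]

mrn = "123456"
labeled = mrn + luhn_check_digit(mrn)
print(labeled, valid(labeled))       # 1234566 True
print(valid(labeled[:-1] + "9"))     # a corrupted digit fails: False
```

The same idea applies to account and order numbers printed on labels and bills.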
• Excellent interfacing with all primary ADT, Orders, Results, Charges, Supply Chain Management, Medication dispensing systems, Provider database systems, Registries, Financial software and IT Management and Transaction Tracking systems
• Quick Patient data retrieval and support for parallel transactions
• Audits and Logs for tracking access to this system
• Support for PACS, Emergency care, Chronic care (ICU / PACU), Long Term care, Periodic visits, point of care charting, meds administration, vital signs data acquisition, alarm notification, surveillance for patient monitors, smart IV pumps, ventilators and other care areas - treatment by specialists in off-site clinics, Tele-Health, refills at Pharmacies, etc.
• Support for Care Plans, Order sets and Templates, results' tracking and related transactions.
• Quick vital sign results and diagnostic imaging/reporting with role based access to appropriate patient data.
• Effective display of specialty content - diagnostic/research images and structured "rich" multimedia reports.
• Secure and efficient access to this data from the internet (home, affiliate clinics, other facilities)
• Removal of paper documentation and effective transcription through support for digital scanning of old documents and accurate linking to the patient charts.
• SSO-Single Sign On, Well defined Security roles, integrated Provisioning and Ease of use for the various stakeholders - here, the patient, the RN, physician, specialist, IT support, vendor maintenance, etc.
• Seamless integration with current workflows with easy extensibility and support for updates to hospital procedures/processes/equipment/regulations.
• Modular deployment of new systems and processes with a long term roadmap and strategies to prevent costly upgrades or vendor changes or technology limitations.
• HIPAA, HL7, JCAHO/JCI and Legal compliance - an entire set of guidelines, with privacy and security being the chief ones. HL7 compliance implies easy integration with other hospital/clinic systems. A hospital with HIMSS Stage 7 certification implies portable digital data end to end, with reduced paperwork and a seamless patient experience, without cumbersome physical documentation, with support for convenient online appointments, online patient results, online provider access, online prescription refills, regulatory compliance, reduced error, etc.
• Efficient standardized communication between the different systems either via "standard" HL7 or DICOM or CCOW or proprietary.
• Seamless support for integration with a high speed fiber network system for high resolution image processing systems such as MRI, X-Ray, CT, etc.
• A high speed independent network for real time patient monitoring systems and devices
• Guaranteed timely Data storage and recovery with at least 99.9999% visible uptime
• Original Patient data available for at least 7 years and compliance with FDA/regulatory rules.
• Disaster recovery compliance and responsive, resilient performance under peak conditions. Redundancy and Disaster Recovery imply that the system is easily recoverable without significant operational disruptions. Monitoring historic downtime trends and reviewing downtime strategies, apart from periodic verification of hot and cold recovery in off-peak hours, would ensure confident, stress-free adoption by the patient care teams.
• Optimized, on demand and elastic data storage ensuring low hardware costs
• Plug 'n' Play of new systems and medical devices into the network, wireless interference-free communication among the vital signs devices and servers, etc.
• Location tracking of patients, providers and devices (RFID based) within the hospital
• Centralized viewing of the entire set of relevant patient data - either by a patient or his/her physician
• Correction of erroneous data and merging of Patient records.
• Support for restructuring existing hospital workflows and processes so that this entire automated workflow environment works with a definite ROI and within a definite time period!
• Integration with billing, insurance and other financial systems related to the care charges.
• Integration with Business Intelligence systems for statistical comparisons, quality indicators, patient feedback, efficiency tracking, fraud prevention, epidemic prevention, anonymized research and reasonable prediction of outcomes. 
• Support for De-Identification or Anonymization for research studies.
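A minimal sketch of what de-identification for a research export might look like (the field names and per-site secret are assumptions, not a compliant implementation): direct identifiers are dropped and the MRN is replaced by a keyed hash, so the same patient always maps to the same research ID without being traceable from the export alone.

```python
# Hedged sketch of de-identification for research export.
import hashlib
import hmac

IDENTIFIERS = {"name", "mrn", "ssn", "address", "phone", "dob"}
SECRET = b"site-specific-key"   # hypothetical per-site secret

def deidentify(record: dict) -> dict:
    # Drop direct identifiers, keep clinical fields.
    out = {k: v for k, v in record.items() if k not in IDENTIFIERS}
    # Keyed hash of the MRN: stable per patient, not reversible
    # without the site secret.
    out["research_id"] = hmac.new(SECRET, record["mrn"].encode(),
                                  hashlib.sha256).hexdigest()[:12]
    return out

row = {"mrn": "123456", "name": "DOE^JOHN", "dob": "1970-01-01",
       "diagnosis": "J18.9", "los_days": 4}
print(deidentify(row))   # keeps diagnosis and los_days, adds research_id
```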
• Future proofing and support for new technologies like Clinical Decision Support (CDSS) - again, a long term roadmap is essential.
• Multi-lingual user interface and international data exchange possibilities
ROI: How does a hospital get timely returns on this IT investment?
  1. Minimization of errors - medication or surgical - and the associated risks
  2. Electronic trail of patient case history available to patient, insurance and physicians
  3. Reduced documentation and improvement in overall efficiency and throughput
  4. Patient Referrals from satellite clinics who can use the EMR's external web links to document on patients - thus providing a continuous electronic report
  5. Possible pay-per-use by external clinics/pharmacies - to use EMR charting/e-refill facilities
  6. Remote specialist consultation
  7. Efficient Charges, Billing and quicker settlements
  8. Better Clinical Decision Support - due to an electronic database of past treatments
  9. In the long term, targeted efficiency and essential innovation provide for better preventive care (implying cheaper insurance) and optimized treatment (ensuring fewer bills and reduced claim denials), leading to volume income and increased patient satisfaction with referrals to friends and relatives - providing long term sustainability in fiercely competitive environments and reducing state expenditure while improving care quality in socialized healthcare regions.
  10. Better compliance of standards and policies - HIPAA, JCI, privacy requirements, security 
  11. Reduced workload due to Process Improvement across departments - ED, Obstetrics/Gynecology, Oncology/Radiology, Orthopedic, Cardiovascular, Pediatrics, Internal Medicine, Urology, General Surgery, Ophthalmology, General/family practice, Dermatology, Psychiatry
  12. Improved Healthcare with Proactive/Preventive Patient Care due to CDSS.
  13. Efficient usage of Providers, Hospital Beds and Outpatient Devices/Resources through Predictive Planning using real time data and historic visit information - leading to improved outcomes for multiple patients receiving predictable and quality care in paperless and advanced medical centers. 
  14. Quality of Patient Care: A silent factor in a hospital's revenue is the quality of patient care. One of its chief drivers is the quality of information efficiently provided to Physicians, through which they can make those critical decisions.
  15. Performance Metrics or KPIs: Any hospital system needs efficient SMART goals that are periodically evaluated and improved upon:
    • Operational: Average Length of Stay/Volume Metrics, Time to service, Hospital Incidents/Infections, Patient Satisfaction, Physician performance, Patient readmission rate, Inpatient mortality rate, Operating Margin/Financial Metrics, Bed Occupancy Rate/Revenue Leakage Metrics/No-shows, Asset Utilization Rates/Inventory Turns, Service Requests
    • Server: CPU/RAM utilization, average disc transfer time, I/Os per disc, disc queues per spindle, average login time, average report display time, number of simultaneous users, availability, network latency
    • Vendors: Service Requests, Incidents, SLA Response times
  16. Improved Emergency care through real time hospital preparedness, apart from reduced ambulance travel time
  17. Reduced Claims Denials: A hospital system that is well integrated and optimized for improved patient care, outcomes and experiences - all the way from supplies procurement through optimal utilization - with accurate documentation implies accurate fulfillment and accurate patient diagnosis and treatment processes (possibly using Artificial Intelligence to feed historical experience data back into front-end systems), finally leading to a reduction in Claims Denials.
  18. Standards compliance: Innovative future-safe solutions need to be standards compliant since these standards are regularly reviewed by expert committees, vendors and regulators.
  19. Improved Community Health: Corporate Social Responsibility for regions implies better governance through reduced epidemics and better public health, mainly achievable through central health data repositories that receive realtime anonymized patient data from the various regional hospitals' EMR systems; the centralized health system in turn can provide alerts that prevent and restrict outbreaks in realtime apart from providing regional best practices and medication alerts to these subscribers.
  20. Customization and Adoption:  Technology or Innovation can only be as good as the customization involved for a seamless provider-patient experience. Too many clicks or screens can and will convert the clinical documentation task into painful detective work. The Digital Transformation of Healthcare when done right, has already proven invaluable (Epidemic detection and reduction, Standardized quality of care for common ailments, Knowledge base of Expert techniques, Well documented case histories, Anonymized Research for future improvements and life saving techniques, Medicare fraud prevention, Prevention of Insurance headaches and paperwork, Global digital access for patients to their prescriptions-records-scans-physicians, telemedicine, etc.). Over a reasonable time/effort, the healthcare system EMR would be customized to provide the best provider-patient interaction experience, similar to the pre-digital era. Experienced EMR implementation teams need to advise the best in class technology available for the requirements and demographics/economics involved.    
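Two of the operational KPIs listed above - average length of stay and bed occupancy rate - reduce to simple arithmetic over encounter records; this sketch uses toy data purely for illustration:

```python
# Hedged sketch: ALOS and bed occupancy from illustrative encounters.
from datetime import date

encounters = [  # (admit, discharge) - toy data only
    (date(2006, 9, 1), date(2006, 9, 4)),
    (date(2006, 9, 2), date(2006, 9, 3)),
    (date(2006, 9, 5), date(2006, 9, 10)),
]

stays = [(discharge - admit).days for admit, discharge in encounters]
alos = sum(stays) / len(stays)          # average length of stay, in days

beds, period_days = 10, 30
# bed-days actually used over bed-days available in the period
occupancy = sum(stays) / (beds * period_days) * 100

print(f"ALOS: {alos:.1f} days, occupancy: {occupancy:.1f}%")
```

A real dashboard would pull these from the EMR's encounter table and trend them per department, but the definitions stay this simple.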
Now, the big picture becomes clear.

Doesn't the above set of requirements apply to any domain? This analysis need not apply only to the hospital domain; the same is true for the Biotech domain (where orders are received, data is processed and analyzed, and the processed data is presented or packaged), and similarly for the Manufacturing, Banking and Insurance domains.

The need is for core engine software - based on EDI (Electronic Data Interchange) - that integrates these mini workflows and helps in their Process Re-Engineering, securely and effectively, using common intersystem communication formats like X-12 or HL7 messages.
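Since X-12 interchanges are flat text with segment and element delimiters, the first job of such an engine is tokenizing them; this is a hedged sketch with illustrative delimiters and an invented claim-like sample (real X-12 declares its delimiters in the ISA header):

```python
# Hedged sketch: tokenizing an X-12-style interchange into segments
# (terminated by '~') and elements (separated by '*').

def parse_x12(interchange, seg_term="~", elem_sep="*"):
    return [seg.split(elem_sep)
            for seg in interchange.strip().split(seg_term) if seg]

sample = "ST*837*0001~CLM*PATIENT123*125.00~SE*3*0001~"
for segment in parse_x12(sample):
    print(segment[0], segment[1:])   # segment ID, then its elements
```

From here, a workflow engine maps segment IDs to handlers - routing an order here, a claim there - which is exactly the integration role the paragraph above describes.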

These Workflow Engines would be the hearts of the digital world!

*EMR - Electronic Medical Record
*CPR - Computerized Patient Record
*CDSS - Clinical Decision Support System
*RHIO - Regional Health Information Organization
*CPOE - computerized physician order entry

Some of the information presented here is thanks to research papers and articles at:
*EMR Adoption Model
*Common Framework for health information networks
*Discovery of Workflow Models for Hospital Data
*Healthcare workflow
*CCOW-IHE Integration Profiles
*Hospital Network Management Best Practices
*12 Consumer Values for your wall

Healthcare Exchanges are the future:
   Imagine a scenario, wherein a patient can use his healthcare card to receive care in any hospital in his state, province or country.  Healthcare Information Exchanges (HIE) are data aggregating and messaging systems that provide population healthcare data, unified patient lookup, medication/procedure reviews, epidemic monitoring, anonymous patient research, cost savings through fraud prevention, optimal e-prescription systems, best care for current symptoms, clear billing comparisons, online regional provider appointments, insurance marketplaces, etc.; patients would also get to choose the best insurance provider from this health exchange using detailed statistics, patients could opt for cost effective and optimal treatment plans and patients would have true portability.    
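One core HIE building block is a record locator service: the exchange stores which facility holds records for a patient identifier rather than the charts themselves. A toy sketch (identifiers and facility names invented):

```python
# Hedged sketch of an HIE record locator: the exchange answers
# "where has this patient been treated?" without storing the charts.
from collections import defaultdict

class RecordLocator:
    def __init__(self):
        self.index = defaultdict(set)   # regional patient ID -> facilities

    def register(self, patient_id, facility):
        self.index[patient_id].add(facility)

    def locate(self, patient_id):
        return sorted(self.index[patient_id])

hie = RecordLocator()
hie.register("REG-0042", "City General")
hie.register("REG-0042", "Lakeside Clinic")
print(hie.locate("REG-0042"))   # ['City General', 'Lakeside Clinic']
```

A hospital receiving the patient queries the locator first, then requests the actual documents from each listed facility - which is what makes the healthcare-card portability scenario workable.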

    Successful HIE implementations have common threads of success:
  •     They meet and balance many stakeholder needs
  •     They are flexible and enable the continuous discovery of niche areas that further add significant value to their stakeholders and community
  •     Their passionate leaders continually discover new sources of funding (both public and private) to remain financially viable, at least initially

    The several Challenges and HIE Implementation Risks include:
  •     Lack of interoperability standards in existing infrastructure, and legacy vendors who are sometimes slow to accept new ones
  •     Lack of available hospital-facility IT resources, leading to longer integration times and escalating costs
  •     Competing priorities in the hospitals/departments during the HIE rollout can also slow down the implementation
  •     Rapidly changing technology shifts may provide alternatives to the services that HIEs offer or plan to offer
  •     Possible duplication and reimplementation of existing applications of the current ecosystem could indirectly increase scope
  •     Slow integration into existing workflows and slow adoption of the HIE post go-live would be detrimental
  •     Slow pace of HIE service adoption by clinical and health policy stakeholders
  •     Rapidly escalating costs due to premium vendor costs
  •     Fading local and federal government financial sponsorships

    The solutions include:
  •     Prioritizing and appropriately phasing the HIE implementation. Timely and careful implementation of high value bundles of HIE services would help drive adoption and ensure sustainability
  •     Understanding and responding to stakeholder/customer needs on an ongoing basis and offering products and services that solve their problems is a win-win
  •     Achieving early value in HIE implementations builds trust driving open, transparent growth
  •     Leveraging lightweight and flexible technologies
  •     Consistent implementation of interoperability standards is fundamental to achieving widespread HIE adoption
  •     Identifying and reaching a critical mass of connected providers and hospitals/clinics/pharmacies is key
  •     Further, business intelligence and data analytics over the HIE would support actionable, effective clinical decision making, thus driving quality and cost improvement and transforming healthcare to pay-for-value
The HIE implementation ROI can be measured through clinical outcomes, clinical process outcomes, financial savings and user satisfaction, apart from providing the government with valuable population health metrics - improvements in healthcare standards, epidemic and medication control, optimal healthcare delivery, etc. The typical metrics include: Utilization (exchange transactions); Participation (engagement of stakeholders); Clinical outcomes (quality indicators); Financial (cost avoidance and fraud prevention); and Users (active providers using the system). Clinical process measures, including workflow, and measures of provider or patient satisfaction may indicate lower adoption initially, since the HIE would at first be used mostly during clinical reviews and policy planning, apart from insurance markets/providers. Clinical outcomes, financial measures, measures of satisfaction and clinical process measures should then ramp up to exhibit the highest reported usage.

What about the latest IT trends and their applications in healthcare?

We already know about Google Earth and Google Hybrid Maps and the advantages of Web 2.0
The next best thing is to search for the best shopping deal or the best real estate by area on a hybrid map - this recombinant web-application reuse technique is called a mashup (often rendered as a heat map).
Mashups have applications in possibly everything from Healthcare to Manufacturing.
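The mashup recipe itself is simple: take two feeds and join them on a shared key. This sketch inlines a fake RSS housing feed and a fake weather lookup (both invented) instead of calling live APIs:

```python
# Hedged sketch of the mashup idea: parse an RSS-like feed and join its
# items with a second data source on a shared key (here, a zip code) -
# the same recombination a housing/climate map mashup performs.
import xml.etree.ElementTree as ET

RSS = """<rss><channel>
<item><title>2BR Apartment</title><zip>94110</zip></item>
<item><title>Studio Loft</title><zip>10001</zip></item>
</channel></rss>"""

WEATHER = {"94110": "18C, fog", "10001": "22C, clear"}  # second source

listings = ET.fromstring(RSS).iter("item")
mashed = [(item.findtext("title"), WEATHER[item.findtext("zip")])
          for item in listings]
print(mashed)  # each listing now carries the weather for its location
```

A real mashup does the same join, but over live API responses, and drops the result onto a map.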
Omnimedix is developing and deploying a nationwide data mashup - Dossia, a secure, private, independent network for capturing medical information, providing universal access to this data along with an authentication system for delivery to patients and consumers.

Click on the below links to see the current 'best in class mash ups'
*After hours Emergency Doctors SMS system - Transcribes voicemail into text and sends SMS to doctors. A similar application can be used for a Transcription Mashup (based on Interactive Voice Response - IVR): Amazon Mturk, StrikeIron Global SMS and Voice XML
* Calendar with Messages Listen to your calendar + leave messages too Mashup (based on IVR): 30 Boxes based on Voxeo , Google Calendar
* Housing/Climate/Jobs/Schools
* Visual Classifieds Browser - Search Apartments, visually
* - Real Estate/Home pricing
* - Rent comparison
* Weather maps
* - Free comparison shopping with your cell phone
* - 10 days weather forecast - worldwide map
* - Find matching music/movie based on "genre" in a visual way
* - Rent/Real Estate/Home pricing - linked to Craigslist
* - Google Maps + Travel Videos
* - Wheel of Zip Code based restaurants
* More sample links at this site (unofficial Google mashup tracker), which includes some notable sites:
* latest news from India by map
* read news by the map - slightly slow
* view news from Internet TV by map -
* see a place in 360

What's on the wish list ? Well, a worldwide mashup for real estate, shopping, education, healthcare will do just fine. Read on to try out YOUR sample...
OpenKapow: The online mashup builder community that lets you easily make mashups. Use their visual scripting environment to create intelligent software Robots that can make mashups from any site with or without an API.
In the words of Dion Hinchcliffe, "Mashups are still new and simple, just like PCs were 20 years ago. The tools are barely there, but the potential is truly vast as hundreds of APIs are added to the public Web to build out of".
Dion also covers the architecture and types of Mashups here, with an update on recombinant web apps

Keep up to date on web2.0 at

Will Silverlight - with its simplified vector-based graphics and its workflow-capable XML language, XAML - be the replacement for Flash and JavaFX?

Well, the technology is promising, and many multimedia content web application providers, including news channels, have signed up for Microsoft Silverlight "WPF/E" due to its lightweight browser-based viewer streaming "DVD" quality video based on the patented VC-1 video codec.

Microsoft® Silverlight™ Streaming by Windows Live™ is a companion service for Silverlight that makes it easier for developers and designers to deliver and scale rich interactive media apps (RIAs) as part of their Silverlight applications. The service offers web designers and developers a free and convenient solution for hosting and streaming cross-platform, cross-browser media experiences and rich interactive applications that run on Windows™ XP+ and Mac OS 10.4+. There is a Silverlight for LINUX too (Moonlight-Mono)

The new way to develop your AJAX RIA "multimedia web application" is - design the UI with an Artist in MS Expression Blend or Adobe Illustrator and mashup with your old RSS, LINQ, JSON, XML-based Web services, REST and WCF Services to deliver a richer scalable web application.

Monday, September 18, 2006

Is it Ubuntu or OpenSuse or PCLinuxOS time?

Not yet tired of windows? Well, read on to find out the next gen O/S out there with easy in-place upgrades...
* New capabilities - create a custom remastered Live CD/DVD/USB/SD and boot from a USB storage device (flash, SD, hard drive, etc). And, with Ubuntu 9.10, OpenSuse 11.2 and Fedora 10 - create a custom Live 64 bit linux CD/DVD/USB using remastersys or makeSusedvd or revisor .

Let’s see where do I start...
Well, my machine crashed - it’s a new Acer 4152 NLCI (4150 or 4650 series) laptop - and the problem was maybe some virus, because ntoskrnl.exe was corrupted and the drivers just stopped loading after a few seconds of boot time, bringing the hard disc to a halt and the screen to a frozen white screen (talk about the old blue screen being upgraded by MS!!!).
Time to look around for a bootable repair kit, right? I tried - got an XP CD and installed XP onto a different partition - but the next day that too went into the same loop as above.
Then I figured the hard disc may have suffered some permanent damage - thanks to all the travel lately.
I looked around for a solution other than DOS, obviously, and found the best free O/S ever!
Knoppix -- the free open source Linux O/S that boots and runs from a CD/DVD (like Ubuntu 9.10 64 bit, PCLinuxOS), with automatic hardware detection, hibernate and suspend support, recognizing all the devices - SCSI, graphics card, sound card, USB, CD, DVD, RAID, modem, LAN card, printer-scanner (hp all-in-one), pcmcia cards, SD, bluetooth, wireless, phone, PDA, iPod, external camera, webcam - you name it. It's a solid system that detects, installs and boots fast (Ubuntu boots in 12 seconds, PCLinuxOS boots in 30 seconds), and runs quickly on ordinary hardware! It can even mount my NTFS hard disc in read-write mode (the ntfs-3g driver supports read and write of NTFS files) to copy files to my USB, so my critical files were saved in no time. Of course, part of the recovery process is being able to write CDs and DVDs, which Knoppix (Ubuntu, OpenSuse and PCLinuxOS) supports with a right click menu.
Knoppix - a flavor of Debian Linux - is a 640MB bootable Live CD with a collection of GNU/Linux s/w, which brings up a Windows like user interface, connects to the internet (after a minor config) through a DSL or cable modem and voila - I'm online via Mozilla, Opera or Konqueror. Knoppix 6.0.1+ is a Debian flavour of Linux with Linux kernel 2.6.28+, KDE 3.5.5+/GNOME 2.16+ and ntfs-3g, and it has all the flavours of a true windows system with OpenOffice 3+ (Word documents - docx also, Excel spreadsheets, PPT presentations), a PDF reader, kaffeine (media player), GIMP (like mspaint), picasa from google, a host of NTFS data recovery tools and good basic games!
PCLinuxOS - is a free open source Linux O/S (flavor of Mandrake/Mandriva Linux with Linux kernel, KDE 4.3.2+/GNOME 2.28+, Open Office 3+, 3D windows support), has all the above software modules, and it is much better than the Knoppix version as of PCLinuxOS 2007 TR4 versus Knoppix 5.1.1 - and PCLOS can be remastered and installed on a usb flash drive very easily like Ubuntu.
Live CD Knoppix (like Live CD Ubuntu and Live CD PCLinuxOS) uses on-the-fly decompression to load the required modules into memory, from a bootable CD/DVD - the CD/DVD is locked by knoppix and you can't use it for writing or DVD viewing - although you have the CD/DVD writer software and the movie viewer software and you can use an external DVD/CD writer/reader to perform CD/DVD burns/reads. You can install the PCLinuxOS and the above flavors to a bootable USB flash drive
The other pluses to these Live CDs are: There is already an available messenger - thanks to secure GAIM (now Pidgin) [ and the newest flavor is Empathy IM with support for voice and video other than skype for linux ] - which can connect to Yahoo, Google Chat, AOL, etc. among others. skype supports voice over ip calls and also video calls,  Kopete also has WebCam support, but Empathy is fast catching up with a much better UI. You can also create custom bootable CDs/DVDs/USB flash/USB drives, since the default live CDs are built for a read-only O/s (from the CD) with default options.
Want to listen to quality music? - using StreamTuner on Linux you can listen to live internet streamed (128 to 256kbps) quality radio on amarok/audacious/xmms2, from around the World for free!!!
Security - you can install the necessary Mozilla addon's and firestarter or shorewall or a custom firewall to boost your internet experience.
VLC, amarok, audacity, mplayer, realplayer and xmms2 are very good multimedia programs covering all the needed experience.
Acrobat 9+ is available for Linux for the pdf community.
Open Office or Star Office are not perfect but are decent Linux office solutions.
If you are wondering about install time - there is none - since the OS just boots off a CD/USB, you can use all the features of a full fledged O/S, and if you need, you may install the O/S to hard disc in 10 minutes.

So what are the minimum requirements of this new O/S (Vista beware!)?
· Intel-compatible CPU (i486 or later),
· 32 MB of RAM for text mode, at least 96 MB for graphics mode with KDE (at least 128 MB of RAM is recommended to use the various office products),
· boot-able CD-ROM drive, or a boot floppy and standard CD-ROM (IDE/ATAPI or SCSI),
· standard SVGA-compatible graphics card,
· serial or PS/2 standard mouse or IMPS/2-compatible USB-mouse.

Before you comment, please note:

  • I know I could have called Acer support - since my laptop is within warranty - well, I didn't call because I needed internet connectivity, not further delays, postal issues & mailing my hard disc.
  • Ubuntu, OpenSuse, Knoppix and PCLinuxOS have very good multi-session KDE 4.3+/GNOME 2.16+ windows environments, which I wanted, compared to a lite-r environment like DSL Linux (that's another topic - 50MB linux - usb/mini-cd boot). Another reason not to go for a usb boot was that the acer laptop BIOS didn't support usb boot, the BIOS was not upgraded, and the only way to upgrade the acer laptop BIOS was through a working Windows XP !! Latest: I finally got a usb floppy drive and got my Acer BIOS upgraded; I also created a custom Live CD of PCLinuxOS and copied it to a USB flash (don't forget to write the MBR using ms-sys) - now a Gateway MX 6124 or Toshiba or HP or Dell laptop boots perfectly through a usb 2.0 flash drive to Live PCLinuxOS 2007/2009. Now you can even remaster Ubuntu 9.10 64 bit (Karmic) and OpenSuse 11.2 64 bit using remastersys.
  • I downloaded the Knoppix 3.6 Live CD over a friend's cable connection, burnt the CD in 10 minutes and was online on my Acer 4152 NLCI laptop in 15 minutes. Latest: I downloaded the Knoppix 6.0.1 Live CD and PCLinuxOS 2009.2 Live CD - Knoppix has 64 bit support and we are still awaiting PCLinuxOS in a 64 bit avatar. Ubuntu 9.10 (Karmic) and OpenSuse 11.2 both support 64 bit live editions which work very fast straight out of the box with all the necessary apps for your new laptop.
  • All of the needed drivers and software including Gaim, Office(Word,Excel,PPT), PDF viewer, printer-scanner drivers, rss reader - were on the CD so nothing to install - unlike ms windows, where a plain o/s is useless!!!
  • For a new 64 bit system - the best Linux LiveCD/Desktop O/S is Fedora 10 (Fedora 11 has kernel/video issues) as it is now easily customizable (unlike Ubuntu) into a new LiveCD using revisor.
  • With a little customization, PCLinuxOS 2009.2 with its kernel and the NVIDIA 190.42 Linux (32/64-bit) driver runs great on a Dell XPS 1530 with an NVIDIA GeForce 8600 GT. The fingerprint scanner has a few problems with Linux, but it's the upek driver, and the same issue exists on Windows Vista too!
  • The laptop CD/DVD drive will be in use and locked - you can't eject the live CD/DVD once you log into the Live OS. This is by design, since the modules are dynamically loaded from the compressed OS image on the CD/DVD. You can burn custom CDs by downloading the necessary software via apt-get after installing to the hard disc; then you can create a remastered custom Live CD or a bootable Live USB flash drive (using a 512MB+ usb flash).
  • I downloaded PCLinuxOS 0.93 and later upgraded to PCLinuxOS 2009.2; both install to hard disc easily with a single click. They also have a neat way to create a custom Live CD/DVD, which takes about 1.5 hours (for a 2.6 GB+ Live DVD) using mklivecd/remasterme/remastersys/k3b (brasero).
  • Webcam support in messengers is an issue with Gaim; I fixed this problem with a new version of Kopete, and there are many more webcam tools like Cheese.
  • With the Broadcom 4318, WPA wireless security is not supported by the default ndiswrapper driver install. You may still have to follow the instructions here (ignore the default, download the 4318 Broadcom driver, and 'build' the Linux Broadcom wireless card drivers with support for secured 128-bit WEP and WPA2-PSK for BCM43* cards).
  • Latest: a DELL XPS 1530 - 4GB RAM, NVIDIA GeForce 8600 GT video card, 1280x800 resolution, HD sound, 2M pixel integrated webcam - quad-boots Windows Vista 32-bit Premium; PCLinuxOS 2009.2 (kernel with the "i8042.nomux=1" boot parameter for the trackpad, nvidia-current video drivers, ndiswrapper (sky2) drivers for 10/100 ethernet, and DELL Wireless a/b/g/n (Broadcom 4328/wl) with WPA2-PSK security, plus "cheese" for the webcam); Ubuntu 9.10 (Karmic) 64-bit; and OpenSuse 11.2 64-bit. Secure wireless (WPA) works flawlessly with modprobe ieee80211_crypt_tkip.
  • The apt-get feature of most Linux flavors (combined with Synaptic in PCLinuxOS; yast in OpenSuse) is great for keeping your specific OS up to date and removing broken packages. The control you have over your machine's software is simply great.
  • I can't access an SD card inserted in the Acer laptop's proprietary Texas Instruments SD/MMC card reader, but I can connect the Kodak digital camera via USB and upload the photos. Latest: the PCLinuxOS support forums have fixes for many Gateway laptop SD card reader issues, covering most SD cards and multi-card readers except the Sony Memory Stick. Ubuntu 9.10 (Karmic) and OpenSuse 11.2 both offer 32-bit and 64-bit live editions that work very fast straight out of the box, with all the necessary apps for your old laptops too.
  • Ubuntu installation with custom apps is made easy with a wiki.
  • I can't read MS Visio or MS Project documents (support is in development; until then I will use Visio or Project docs converted to JPEGs). There are lighter alternatives in KOffice.
  • OpenOffice and KOffice are not yet very mature and cannot reasonably compare to MS Office 2007, either in features or in printing Excel documents. KOffice is nice and much 'lighter', but it isn't quite as powerful as OpenOffice. AbiWord is a good word-processing program.
  • PCLinuxOS has Beryl, a decent 3D window manager with nice 3D window effects - but the 3D desktop is still not a very refined concept, and I would suggest uninstalling the Compiz and Beryl 3D software.
  • I have so far recovered 3 machines from fatally non-bootable Windows using PCLinuxOS and its forums.
  • For safety with the MBR (master boot record), I'd suggest installing grub, because recovery is very easy and grub supports Vista perfectly (make sure you shrink the Windows volume from within Vista itself before installing Linux to the hard disc). Latest: grub2 is available for Ubuntu 9.10 with better features. You can also shrink disks using Linux gparted (after defragmenting with PerfectDisk, which works better than Windows Defrag). For simplified grub2 recovery, keep a custom Live CD ready with chroot and grub installed.
  • Support Forum for Sound Card Troubleshooting 
  • The Flash plugin for 64-bit Linux is still in pre-release
  • Mplayer and plugins for video (Mandriva-PCLinux)
  • NVIDIA has the latest (190.42) 64-bit display drivers for Linux
  • Latest: install VirtualBox on your existing Linux system and run the 32-bit/64-bit live ISO of your favorite Linux flavor in a virtual machine to try it out. QEMU is also good virtualization software.
Well, that's it from me, see ya... Keep your Ubuntu, OpenSuse, PCLinuxOS, Knoppix or Fedora CD/DVD/USB/SD ready... as we say in Linux, there is true consumer choice - even though I personally vote for Ubuntu, OpenSuse and PCLinuxOS (in that order :-) )

Oh ya... Knoppix supports clusters: a multi-computer version called "ParallelKnoppix" is out, which converts a group of Windows machines into a Linux cluster farm. Descriptions are here -

Ubuntu/OpenSuse/Knoppix 64-bit natively supports 64-bit processors and 4GB+ memory.
Howto? Another useful site for learning to use Linux -
Some more flavors that receive good desktop Linux reviews are
Other good sites with helpful linux links:

Wednesday, February 15, 2006

Code Review Checklist

Following is a checklist I refer to often; it catches many common issues:

1. No errors should occur when building the source code, and no new warnings should be introduced by changes made to the code. Any warnings that remain during the build should be within acceptable boundaries, with good reasoning.

2. Each source file should start with an appropriate header and copyright information. All source files should have a comment block describing the functionality provided by the file.

3. Describe each routine, method, and class in one or two sentences at the top of its definition. If you can't describe it in a short sentence or two, you may need to reassess its purpose. It might be a sign that the design needs to be improved and routines may need to be split into smaller more reusable units. Make it clear which parameters are used for input and output.

4. Comments are required for aspects of variables that the name doesn't describe. Each global variable should indicate its purpose and why it needs to be global.

5. Comment the units of numeric data. For example, if a number represents length, indicate if it is in feet or meters.

6. Complex areas, algorithms, and code optimizations should be sufficiently commented, so other developers can understand the code and walk through it.

7. Dead Code: There should be an explanation for any code that is commented out. "Dead Code" should be removed. If it is a temporary hack, it should be identified as such.

8. Pending/TODO: A comment is required for all code not completely implemented. The comment should describe what's left to do or is missing. You should also use a distinctive marker that you can search for later (For example: "TODO:").

9. Are assertions used everywhere data is expected to have a valid value or range? Assertions make it easier to identify potential problems. For example, test if pointers or references are valid.
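A minimal C++ sketch of this point (the function and its contract are illustrative, not from the original checklist):

```cpp
#include <cassert>
#include <cstddef>

// Illustrative example: the assertion documents and checks the
// "pointer must be valid" assumption before the data is used.
int sum(const int* values, std::size_t count) {
    assert(values != nullptr && "caller must pass a valid buffer");
    int total = 0;
    for (std::size_t i = 0; i < count; ++i)
        total += values[i];
    return total;
}
```

In release builds assertions typically compile away, so they document assumptions during development without costing anything in production.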

10. An error should be detected and handled if it affects the execution of the rest of a routine. For example, if a resource allocation fails, this affects the rest of the routine if it uses that resource. This should be detected and proper action taken. In some cases, the "proper action" may simply be to log the error and send an appropriate message to the user.

11. Make sure all resources and memory allocated are released in the error paths. Use try-finally (or `using`) in C# and RAII destructors in C++ (C++ has no `finally`). Is allocated (non-garbage-collected) memory freed? All allocated memory needs to be freed when no longer needed, in all code paths, especially error code paths. Unmanaged objects such as file/socket/graphics/database objects in C#/Java need to be disposed of at the earliest opportunity. Files, sockets, database connections, etc. (basically any object with paired creation and deletion methods) should be freed even when an error occurs. For example, wherever you use "new" in C++, there should be a matching delete somewhere that disposes of the object, and resources that are opened must be closed - when opening a file in most development environments, you need to call a method to close the file when you're done.
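A small C++ RAII sketch of this idea (file name and helper are hypothetical): the custom deleter ties `fclose` to scope exit, so the file is released on every path, including early error returns.

```cpp
#include <cstdio>
#include <memory>

// Illustrative sketch: FILE* wrapped in unique_ptr with fclose as the
// deleter, so every return path (success or error) closes the file.
bool write_line(const char* path, const char* line) {
    std::unique_ptr<FILE, int (*)(FILE*)> f(std::fopen(path, "w"),
                                            &std::fclose);
    if (!f) return false;              // error path: nothing to release
    if (std::fputs(line, f.get()) < 0)
        return false;                  // early return: fclose still runs
    return true;                       // normal path: fclose still runs
}
```

The same pattern generalizes to sockets, locks and database handles: acquire in a constructor, release in the destructor, and error paths take care of themselves.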

12. If the source code uses a routine that throws an exception, there should be a function in the call stack that catches it and handles it properly. There should not be any abnormal terminations for expected flows and also the user should be informed of any un-recoverable situations.

13. Does the code respect the project coding conventions? Check that variable naming, indentation and bracket style follow them. Use FxCop and follow the C++/C# coding conventions/guidelines within acceptable limits.

14. Consider notifying your caller when an error is detected. If the error might affect your caller, the caller should be notified. For example, the "Open" methods of a file class should return error conditions. Even if the class stays in a valid state and other calls to the class will be handled properly, the caller might be interested in doing some error handling of its own.

15. Don't forget that error-handling code can itself be defective. It is important to write test cases that exercise the error-handling paths.

16. Make sure there is no code path where the same object is released more than once. Check error code paths.

17. COM Reference Counting: Frequently a reference counter is used to keep the reference count on objects (For example, COM objects). The object uses the reference counter to determine when to destroy itself. In most cases, the developer uses methods to increment or decrement the reference count. Make sure the reference count reflects the number of times an object is referred. Similarly tracing is important in code to validate the flow.

18. Thread Safety: Are all global variables thread-safe? If global variables can be accessed by more than one thread, code altering the global variable should be enclosed using a synchronization mechanism such as a mutex. Code accessing the variable should be enclosed with the same mechanism.
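A hedged C++ sketch of the point above (the counter and thread counts are invented for illustration): every access to the shared variable goes through the same mutex, so concurrent increments cannot interleave mid-update.

```cpp
#include <mutex>
#include <thread>

// Illustrative sketch: a shared counter guarded by one mutex.
std::mutex counter_mutex;
long counter = 0;

void increment(int times) {
    for (int i = 0; i < times; ++i) {
        std::lock_guard<std::mutex> lock(counter_mutex);  // same mutex everywhere
        ++counter;
    }
}

long run_two_threads() {
    std::thread a(increment, 100000);
    std::thread b(increment, 100000);
    a.join();
    b.join();
    return counter;   // deterministic only because access is synchronized
}
```

Without the lock, the two threads would race on `++counter` and the final total would be unpredictable.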

19. If some objects can be accessed by more than one thread, make sure member variables are protected by synchronization mechanisms.

20. It is important to release the locks in the same order they were acquired to avoid deadlock situations. Check error code paths.

21. Database Transactions: Always Commit/Rollback a transaction at the earliest possible time. Keep transactions short.

22. Make sure there's no possibility for acquiring a set of locks (mutex, semaphores, etc.) in different orders. For example, if Thread A acquires Lock #1 and then Lock #2, then Thread B shouldn't acquire Lock #2 and then Lock #1.
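A C++17 sketch of the lock-ordering point (the two "accounts" and their mutexes are hypothetical): `std::scoped_lock` acquires both mutexes with a built-in deadlock-avoidance algorithm; the manual alternative is for every thread to agree on one fixed order, e.g. always Lock #1 before Lock #2.

```cpp
#include <mutex>

// Illustrative sketch: two shared balances, each guarded by its own mutex.
std::mutex account_a_mutex, account_b_mutex;
int account_a = 100, account_b = 0;

int transfer_a_to_b(int amount) {
    // Locks both mutexes atomically (deadlock-free), regardless of the
    // order other threads name them in.
    std::scoped_lock guard(account_a_mutex, account_b_mutex);
    account_a -= amount;
    account_b += amount;
    return account_b;
}
```

If one thread took `account_a_mutex` then `account_b_mutex` by hand while another took them in the opposite order, each could end up waiting forever for the other's lock.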

23. Are loop-ending conditions accurate? Check all loops to make sure they iterate the right number of times, and check the condition that ends the loop to ensure it stops after the expected number of iterations.

24. Check for code paths that can cause infinite loops. Make sure loop end conditions will be met unless otherwise documented.

25. Do recursive functions run within a reasonable amount of stack space? Recursive functions should run with a reasonable amount of stack space. Generally, it is better to code iterative functions with proper/predictable end conditions.
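As a small illustration of preferring iteration (the factorial example is mine, not from the checklist): the loop below runs in constant stack space with a predictable end condition, whereas a naive recursive version consumes a stack frame per call.

```cpp
// Illustrative sketch: iterative factorial instead of the recursive
//   long factorial(int n) { return n <= 1 ? 1 : n * factorial(n - 1); }
// Same result, but no stack growth proportional to n.
long factorial(int n) {
    long result = 1;
    for (int i = 2; i <= n; ++i)
        result *= i;
    return result;
}
```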

26. Are whole objects duplicated when only references are needed? This happens when objects are passed by value although only references are required; it also applies to algorithms that copy a lot of memory. Consider using an algorithm that minimizes the number of object duplications, reducing the data that needs to be transferred in memory. Avoid copying entire objects onto the stack - pass references instead (the default in C# for large user-defined reference types).
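A C++ sketch of this point (function and data are illustrative): taking the container by `const&` avoids duplicating it, where pass-by-value would copy every element onto the callee's stack.

```cpp
#include <string>
#include <vector>

// Illustrative sketch: const reference - the vector is read, not copied.
// A signature of total_length(std::vector<std::string> words) would
// duplicate the whole container on every call.
std::size_t total_length(const std::vector<std::string>& words) {
    std::size_t n = 0;
    for (const std::string& w : words)   // reference again: no per-item copy
        n += w.size();
    return n;
}
```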

27. Does the code have an impact on size, speed, or memory use? Can it be optimized? For instance, if you use data structures with a large number of occurrences, you might want to reduce the size of the structure.

28. Blocking calls: Consider using a different thread for code making a function call that blocks or use a monitor thread with a well-defined timeout

29. Is the code doing busy waits instead of using synchronization mechanisms or timer events? Doing busy waits takes up CPU time. It is a better practice to use synchronization mechanisms since they force the thread to sleep without using valuable cpu time.
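A hedged C++ sketch of replacing a busy wait (the producer/consumer shape is invented for illustration): the consumer sleeps inside `wait()` instead of spinning on the flag, so it uses no CPU until the producer notifies it.

```cpp
#include <condition_variable>
#include <mutex>
#include <thread>

// Illustrative sketch: shared state plus a condition variable, instead
// of a loop like  while (!ready) {}  that burns CPU.
std::mutex m;
std::condition_variable cv;
bool ready = false;
int shared_value = 0;

int wait_for_value() {
    std::unique_lock<std::mutex> lock(m);
    cv.wait(lock, [] { return ready; });   // thread sleeps here, no spinning
    return shared_value;
}

int produce_and_consume() {
    int result = 0;
    std::thread consumer([&] { result = wait_for_value(); });
    {
        std::lock_guard<std::mutex> lock(m);
        shared_value = 42;
        ready = true;        // set the condition under the same mutex
    }
    cv.notify_one();         // wake the sleeping consumer
    consumer.join();
    return result;
}
```

The predicate form of `wait` also handles the case where the producer finishes first: the consumer checks `ready` before it ever blocks.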

30. Optimizations often make code harder to read and more likely to contain bugs. They should be avoided unless a need has been identified. Has the code been profiled? Check whether any over-optimization has caused functionality to disappear.

31. Are function parameters explicitly verified in the code? This check is encouraged for functions where you don't control the whole range of values that are sent to the function. This isn't the case for helper functions, for instance. Each function should check its parameter for minimum and maximum possible values. Each pointer or reference should be checked to see if it is null. An error or an exception should occur if a parameter is invalid.

32. Make sure an error message is displayed if an index is out of bounds. This can happen in C# too, for dynamically created lists, etc.

33. Make sure the user sees simple error messages, not technical jargon.

34. Don't return references to objects declared on the stack; return references to objects created on the heap. In C#, `new` always creates a heap object, but in C++ a stack object disappears when the function returns, so a returned reference to it points at invalid data.
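A C++ sketch of the dangling-reference pitfall (the greeting functions are hypothetical): the commented-out version returns a reference to a local that dies with the stack frame; the safe versions return by value or hand back an owning smart pointer.

```cpp
#include <memory>
#include <string>

// BAD (illustrative, left commented out): the local `s` is destroyed
// when the function returns, so the reference dangles.
// const std::string& make_greeting_bad() {
//     std::string s = "hello";
//     return s;   // undefined behavior at the call site
// }

// GOOD: return by value (moved or elided, not dangling).
std::string make_greeting() {
    std::string s = "hello";
    return s;
}

// GOOD: or return an owning pointer to a heap object.
std::unique_ptr<std::string> make_greeting_heap() {
    return std::make_unique<std::string>("hello");
}
```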

35. Make sure there are no code paths where variables are used before being initialized. If an object is used by more than one thread, make sure it is not in use by another thread when you destroy it. If an object is created by a function call, make sure the object was created before using it. The VS.NET C# compiler checks for use of uninitialized variables, so don't ignore that warning.

36. Does the code re-write functionality that could be achieved by using an existing API/code? Don't reinvent the wheel. New code should use existing functionality as much as possible. Don't rewrite source code that already exists in the project. Code that is replicated in more than one function should be put in a helper function for easier maintenance. The existing code/library routines may be already optimized for this operation.

37. Bug Fix Side Effects: Does a fix made to a function change the behavior of caller functions? Sometimes code expects a function to behave incorrectly. Fixing the function can, in some cases, break the caller. If this happens, either fix the code that depends on the function, or add a comment explaining why the code can't be changed.

38. Does the bug fix correct all the occurrences of the bug? If the code you're reviewing is fixing a bug, make sure it fixes all the occurrences of the bug.

39. Is the code doing signed/unsigned conversions? Check all signed-to-unsigned conversions: can sign extension cause problems? Check all unsigned-to-signed conversions: can overflow occur? Test with the minimum and maximum possible values. A downcast from 'long' to 'int' can mean loss of data; prefer the larger data type.
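Two C++ illustrations of these pitfalls (the helper functions are mine): converting a negative signed value to unsigned wraps to a huge value, and narrowing a 64-bit value to 32 bits silently drops high bits.

```cpp
#include <cstdint>

// Illustrative sketch: -1 converted to unsigned wraps to UINT_MAX,
// so a "small" negative suddenly compares as enormous.
bool negative_becomes_huge(int x) {
    unsigned int u = static_cast<unsigned int>(x);
    return u > 1000000u;   // true for x = -1
}

// Illustrative sketch: downcasting int64_t to int32_t loses data
// whenever the value doesn't fit in 32 bits.
bool downcast_loses_data(std::int64_t big) {
    auto narrow = static_cast<std::int32_t>(big);
    return static_cast<std::int64_t>(narrow) != big;
}
```

This is why the checklist suggests testing conversions at the minimum and maximum possible values rather than only with typical inputs.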

40. Ensure the developer has unit tested the code before sending it for review. All the limit and main functionality cases should have been tested.

41. As a reviewer, you should understand the code. If you don't, the review may not be complete, or the code may not be well commented.

42. Lastly, when executed, the code should do what it is supposed to.

Thanks to the authors at