Friday, December 17, 2004

Monday, December 13, 2004

UML 2.0

UML is a modeling language for specifying, visualizing, constructing, and documenting the artifacts of a software-intensive system

http://www.agilemodeling.com/essays/umlDiagrams.htm

http://www.dotnetcoders.com/web/learning/uml/default.aspx

Thanks to the authors for a nice site on UML.

Thursday, November 18, 2004

Windows/.NET Event logging (with Internationalization/parameter features in a message file)

Event logging pre-.NET
When you access the event log using the standard NT API calls, the system stores a structure that contains (amongst other things) the message ID and any replacement strings ("inserts") for the message -- but it does not store the message text itself.
Reading from the log
When you read an entry from an event log, the system reads the stored message ID and replacement strings, gets the text of the message for the current locale from a MESSAGETABLE resource contained within the file specified in the EventMessageFile key in the registry, inserts the replacement strings, and returns you the formatted string.
As well as keeping the log file small (which improves performance when accessing the event log on a remote machine), storing just the message ID and replacement strings also means that the same message can be viewed in different languages, as long as the client:

  • has the file installed that contains that locale's MESSAGETABLE, and
  • has its local registry configured to tell NT where to find it.

The file containing the messages only has to be installed on the machine that is doing the reading; it does not have to exist on the one that is doing the writing or the one that holds the log (they can all be different machines).
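The read-time formatting step can be sketched in Python; the message table, event ID, and strings below are all invented for illustration:

```python
def format_event(message_table, event_id, inserts):
    """Mimic how the event viewer renders an entry: look up the locale's
    format string by message ID, then substitute the stored inserts
    (%1, %2, ...). An illustrative sketch, not the Win32 API."""
    template = message_table[event_id]
    for i, insert in enumerate(inserts, start=1):
        template = template.replace("%" + str(i), insert)
    return template

# A tiny stand-in for a MESSAGETABLE resource in one locale:
english = {1001: "Service %1 stopped unexpectedly with code %2."}

# The log itself stores only the ID and the inserts...
print(format_event(english, 1001, ["MyService", "0x80004005"]))
# ...so a French MESSAGETABLE could render the same entry in French.
```

Because the locale's table is consulted only at read time, the writer never needs the message file at all.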
Event logging with .NET
Under .NET, message sources are registered with the EventMessageFile value always set to EventLogMessages.dll, which is installed in the GAC. This file has 65,535 entries, each of which contains a single string: %1. In other words, for every possible event ID the entire format string is a placeholder that takes a single replacement string -- which is always the message that you pass to EventLog.WriteEntry().

The main drawbacks of this approach are:

  • You have the responsibility of choosing the locale used to format the message before writing it to the log, so all clients have to view the message in the same language.
  • The log file is larger than necessary, as it has to hold the full formatted string rather than just the message ID and replacement strings.
  • If you want to view the entries written to a remote log on that machine, it must have the .NET runtime installed and EventLogMessages.dll registered in the remote computer's GAC.

Read on for the solution class at http://www.codeproject.com/csharp/eventlogex.asp

Thanks to the Author for this public code.

SQL Server: @@IDENTITY deadlock problem and fix

This interesting problem occurs only when there is an update after the insert. Because the trigger below updates the whole table, each session's trigger needs locks on rows the other session has just inserted, so the two sessions deadlock trying to get hold of them.

CREATE TABLE [test]
(
[a] [int] IDENTITY (1, 1) NOT NULL ,
[b] [varchar] (10) NULL ,
[c] [int] NULL ,
CONSTRAINT [PK__test] PRIMARY KEY CLUSTERED ( [a] )
)
GO

Here [a] and [c] have to have the same value.

So, this programmer goes ahead and adds a trigger to do this on the insert operation.


CREATE TRIGGER test_update ON dbo.test
FOR INSERT
AS
begin
-- note: no WHERE clause, so this updates column c in every row of the
-- table, not just the newly inserted rows
update dbo.test set c = a
end;
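For contrast, a per-row version of such a trigger (one that updates only the inserted row) can be sketched in SQLite from Python; note that SQLite's trigger dialect differs from T-SQL, and this says nothing about SQL Server locking behavior:

```python
import sqlite3

# Sketch of the trigger idea in SQLite (not T-SQL): a FOR EACH ROW
# trigger with a WHERE clause touches only the newly inserted row,
# unlike the T-SQL trigger above, which updates every row in the table.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE test (
    a INTEGER PRIMARY KEY AUTOINCREMENT,
    b VARCHAR(10),
    c INTEGER
);
CREATE TRIGGER test_update AFTER INSERT ON test
FOR EACH ROW
BEGIN
    UPDATE test SET c = NEW.a WHERE a = NEW.a;
END;
""")
conn.execute("INSERT INTO test(b) VALUES ('test111')")
conn.execute("INSERT INTO test(b) VALUES ('test222')")
# each row's c mirrors its identity column a
print(conn.execute("SELECT a, c FROM test ORDER BY a").fetchall())
```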

And the insert statement, called by two threads (client processes) simultaneously, is:

insert into test(b) VALUES ('test111')

This leads to a deadlock and this error message:
"Transaction (Process ID) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction."

The fix:
insert into test(b,c) VALUES ('test111',@@IDENTITY)

Notes:


Friday, October 15, 2004

Cool MS SQL Server Tools

Sqldiag - Sqldiag is a utility used for report generation and collection of diagnostic information on database server / operating system configuration parameters. Sqldiag gathers the information, even if Microsoft SQL Server 2000 services are stopped. The report generated by Sqldiag contains the following information:
  • a complete dump of all SQL Server error logs;
  • registry information related to SQL Server;
  • SQL Server system DLL versions;
  • output generated by sp_configure, sp_who, sp_lock, sp_helpdb, xp_msver, and sp_helpextendedproc;
  • information about all system processes (master..sysprocesses);
  • information about all user processes and connections (including input-buffer SPIDs and deadlocks);
  • information about operating system parameters (including OS version, video display, drivers, DMA, memory, services, IRQ and ports, devices, environment, and network);
  • information about the last 100 user queries.

The Sqldiag utility is installed to the \Program Files\Microsoft SQL Server\MSSQL\Binn directory by default.

Profiler - Profiler is the executable for SQL Server Profiler. SQL Server Profiler is typically used for monitoring SQL Server events, such as debugging T-SQL statements and stored procedures, and for troubleshooting problems (by capturing events in real time and replaying them later).

Sqlmaint - Sqlmaint is a maintenance utility. Sqlmaint performs a set of tasks specified by the DBA on one or more databases (for example, backing up databases, updating statistics, rebuilding indexes, and running DBCC checks).
The Sqlmaint utility is installed to the \Program Files\Microsoft SQL Server\MSSQL\Binn directory by default.

bcp - A utility used for the interactive bulk copying of data between a SQL Server 2000 instance and a data file (a format information file should be specified, or the default bcp.fmt is used instead). The bcp utility is the typical example of a "two-way" tool, i.e. copying data both "into a SQL Server instance" and "out of a SQL Server instance" is allowed. Alternatively, bcp can be used for copying data:

  • between SQL Server instances with different language collations;
  • to or from a view;
  • returned from a T-SQL query (to a data file);
  • between Microsoft SQL Server and database servers of other vendors;
  • between SQL Servers running on different processor architectures;
  • to or from a database table (including temporary tables);
  • between databases within one SQL Server instance.

The bcp utility is installed by default to the \Program Files\Microsoft SQL Server\80\Tools\Binn directory.

itwiz - itwiz allows the Index Tuning Wizard to be executed from a command prompt. Index tuning using itwiz is similar to tuning via the Index Tuning Wizard's user interface. The itwiz utility is installed to the \Program Files\Microsoft SQL Server\80\Tools\Binn directory by default.

osql - A utility for interactively executing Transact-SQL scripts and stored procedures. It uses ODBC libraries for communicating with the database server. Osql can be started directly from the operating system command prompt and uses a standard output device (the monitor, by default) for displaying results. The osql utility is installed to the \Program Files\Microsoft SQL Server\80\Tools\Binn directory by default.

Simple Enabling/Disabling Constraints/Triggers on the entire SQL 2000 Database
sp_msforeachtable "ALTER TABLE ? NOCHECK CONSTRAINT all"
sp_msforeachtable "ALTER TABLE ? DISABLE TRIGGER all"

sp_msforeachtable @command1="print '?'", @command2="ALTER TABLE ? CHECK CONSTRAINT all"
sp_msforeachtable @command1="print '?'", @command2="ALTER TABLE ? ENABLE TRIGGER all"

Wednesday, October 13, 2004

Simple SQL Server/MSDE Database Installation through osql

Here, an MSDE database is dropped, its files copied, re-attached, and a user is given rights on the db.
With minor changes to the osql parameters a server name can be provided, and this script will work for a SQL 2000 database.

--drop old db
osql -E -S -Q "DROP DATABASE [dbname]"

--copy the mdf to the target loc
copy "c:\installtemp\dbname*.?df" "C:\program Files\Microsoft SQL Server\MSSQL\Data"

--make sure the db file is not read only
attrib -r "C:\program Files\Microsoft SQL Server\MSSQL\Data\dbname*.?df"

--attach the db to the target instance/server
osql -E -S -Q "EXEC sp_attach_db @dbname = 'dbname', @filename1 = N'C:\Program Files\Microsoft SQL Server\MSSQL\Data\dbname_Log.LDF', @filename2 = N'C:\Program Files\Microsoft SQL Server\MSSQL\Data\dbname_Data.MDF'"

--the remaining T-SQL runs inside the attached database (e.g. from a script file passed to osql -i)
use dbname
--add a user to access this db apart from default db admin user
EXEC sp_grantdbaccess "domain\user", "domain\user"
GO

--grant read access to this user
exec sp_addrolemember N'db_datareader', "domain\user"
GO

--grant write access to this user
exec sp_addrolemember N'db_datawriter', "domain\user"
GO

--security script to make sure sps and fns have exec priv.
osql -E -S -Q -i "c:\installtemp\dbnameSecuritySetup.sql"

--Security script is below -- dbnameSecuritySetup.sql
--grant exec priv on every sp and fn in the db (via a cursor).
USE dbname
DECLARE @sExecQry sysname
DECLARE EXEC_SPS CURSOR LOCAL FOR
select 'grant exec on ' + QUOTENAME(name) + ' to "domain\user" ' from sysobjects where (type = 'P' or type='FN') and objectproperty(id,'IsMSShipped')=0
OPEN EXEC_SPS
FETCH NEXT FROM EXEC_SPS INTO @sExecQry
WHILE @@FETCH_STATUS = 0
BEGIN
EXEC(@sExecQry)
--PRINT @sExecQry -- debug only
FETCH NEXT FROM EXEC_SPS INTO @sExecQry
END
CLOSE EXEC_SPS
GO
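The QUOTENAME call above protects the generated GRANT statements from procedure names containing special characters. Its default bracket-quoting rule can be approximated in Python (an approximation, not the exact T-SQL function; the procedure names below are made up):

```python
def quote_name(identifier):
    """Approximate T-SQL QUOTENAME with its default [] delimiters:
    wrap the name in brackets and double any closing bracket inside it."""
    return "[" + identifier.replace("]", "]]") + "]"

# Build the same kind of GRANT statements the cursor above generates:
for name in ["usp_GetOrders", "odd]name"]:
    print("grant exec on " + quote_name(name) + ' to "domain\\user"')
```

Without this quoting, a name like `odd]name` would break out of the brackets in the generated statement.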

--Perform checks on the database
DBCC CHECKCONSTRAINTS WITH ALL_CONSTRAINTS
GO

DBCC CHECKDB
GO

DBCC CHECKALLOC
GO

DBCC CONCURRENCYVIOLATION
GO

DBCC DROPCLEANBUFFERS
GO

DBCC FREEPROCCACHE
GO

DBCC UPDATEUSAGE(0)
GO

Thursday, October 07, 2004

Migrating Oracle Databases to SQL Server 2000

SQL Server 2000 only works on Windows-based platforms, including Windows 9x, Windows NT, Windows 2000 and Windows CE.
In comparison with SQL Server 2000, Oracle 9i Database supports all known platforms, including Windows-based platforms, AIX-Based Systems, Compaq Tru64 UNIX, HP 9000 Series HP-UX, Linux Intel, Sun Solaris and so on.

Sometimes there is the migration issue, so here goes...

There are some nice articles on Oracle to SQL Server migration at

http://www.microsoft.com/resources/documentation/sql/2000/all/reskit/en-us/part2/c0761.mspx

SQL Server vs Oracle Feature differences
http://www.mssqlcity.com/Articles/Compare/sql_server_vs_oracle.htm

There is a nice tool for the Migration of stored procs/SQL at
http://www.swissql.com/oracle-to-sql-server.html

Latest Top Ten TPC-C by Performance (Version 5 results)
http://www.tpc.org/tpcc/results/tpcc_perf_results.asp

Top Ten TPC-C by Price/Performance (Version 5 results)
http://www.tpc.org/tpcc/results/tpcc_price_perf_results.asp

Oracle vs SQL Server
http://www.dba-oracle.com/oracle_tips_oracle_v_sql_server.htm

Thanks to the authors of these public sites for the relevant information.

Should 4+1 Views-based architecture be a standard for High Level Design documents?

The template and details are at:

http://www.cs.ubc.ca/~gregor/teaching/papers/4+1view-architecture.pdf

"To describe a software architecture, we use a model composed of multiple views or perspectives. In order to eventually address large and challenging architectures, the model we propose is made up of five main views

  • The logical view, which is the object model of the design (when an object-oriented design method is used),
  • the process view, which captures the concurrency, availability, performance and synchronization aspects of the design,
  • the physical view, which describes the mapping(s) of the software onto the hardware and reflects its distributed aspect,
  • the development view, which describes the static organization of the software in its development environment.
  • The description of an architecture—the decisions made—can be organized around these four views, and then illustrated by a few selected use cases, or scenarios which become a fifth view."

Thanks to the Author - Philippe Kruchten - and IEEE for this invaluable experience paper.

Tuesday, October 05, 2004

Testing SSL on Win Server 2003/IIS6

Hi there,
Been busy with lots of work with .NET Remoting Performance Testing and stuff.
Found something interesting so here goes.

There's a nice easy way to test your IIS6 SSL performance: install the free SelfSSL certificate (SelfSSL Version 1.0) from the IIS Resource Kit (http://www.microsoft.com/downloads/details.aspx?FamilyID=56fc92ee-a71a-4c73-b628-ade629c89499&displaylang=en). It's very easy to use (that's what we look for, right?). Check out http://www.visualwin.com/SelfSSL/ for detailed directions on how to get your site into https (for testing only).

The following very useful (performance, analysis, and deployment) tools are also available in the IIS6 Resource Kit package:

  • IISCertDeploy.vbs Version 1.0
  • Log Parser Version 2.1
  • Metabase Explorer Version 1.6
  • Permissions Verifier Version 1.0
  • Web Capacity Analysis Tool Version 5.2
Thanks to Microsoft and the authors of the http://www.visualwin.com/ site on which there is lots of other interesting info. on Win 2003 and IIS6.

Thursday, September 30, 2004

Nice article on Unit Test Patterns

Think you know all the patterns in unit testing? Think again; here are the various unit-testing patterns.

Unit Testing Patterns

Pass/Fail Patterns

  • The Simple-Test Pattern
  • The Code-Path Pattern
  • The Parameter-Range Pattern
Data Driven Test Patterns
  • The Simple-Test-Data Pattern
  • The Data-Transformation-Test Pattern
Data Transaction Patterns
  • The Simple-Data-I/O Pattern
  • The Constraint-Data Pattern
  • The Rollback Pattern
Collection Management Patterns
  • The Collection-Order Pattern
  • The Enumeration Pattern
  • The Collection-Constraint Pattern
  • The Collection-Indexing Pattern
Performance Patterns
  • The Performance-Test Pattern
Process Patterns
  • The Process-Sequence Pattern
  • The Process-State Pattern
  • The Process-Rule Pattern
Simulation Patterns
  • Mock-Object Pattern
  • The Service-Simulation Pattern
  • The Bit-Error-Simulation Pattern
  • The Component-Simulation Pattern
Multithreading Patterns
  • The Signalled Pattern
  • The Deadlock-Resolution Pattern
Stress-Test Patterns
  • The Bulk-Data-Stress-Test Pattern
  • The Resource-Stress-Test Pattern
  • The Loading-Test Pattern
Presentation Layer Patterns
  • The View-State Test Pattern
  • The Model-State Test Pattern

Read on at Advanced Unit Testing: Patterns by Marc Clifton

Thanks to the author for this material on Unit Testing.

Sunday, September 26, 2004

.NET: Solution-pattern for long-running UI responsive applications

Many a time we face this problem: a worker/IO thread performs some time-consuming background action, fetches results, and needs to update the UI, while the UI must stay "freeze-free" and responsive to the user. Sometimes the UI also needs to support a "Cancel/Close" operation.

There are various solutions to this problem in .NET WinForms -- the most commonly used one is a lock mechanism, which lets the worker thread safely update the UI data.

But there is a much simpler, scalable solution using message passing, because .NET has two nice features: 1) a control can tell us whether the caller of a function is the UI thread (the control's creator thread) via the InvokeRequired property -- if the caller is not the UI thread, we must call the control through its Invoke method, which synchronously executes a function on the thread that owns the control's underlying window handle (the UI thread); and 2) any delegate (function) can be called asynchronously, on a worker thread drawn from the .NET thread pool, via the BeginInvoke method.
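Stripped of the WinForms specifics, this message-passing shape can be sketched in plain Python (hypothetical names; a per-message reply queue stands in for Invoke's synchronous hand-off to the UI thread):

```python
import queue
import threading

# Messages flow worker -> UI through ui_queue; each message carries its own
# reply queue, so the worker blocks until the UI has processed it, mimicking
# the synchronous hand-off of Control.Invoke.
ui_queue = queue.Queue()

def show_progress(done, total):
    reply = queue.Queue()
    ui_queue.put((done, total, reply))
    return reply.get()  # True means the UI asked us to cancel

def perform_job(total, out):
    done = 0
    for done in range(1, total + 1):
        if show_progress(done, total):
            break  # cancelled by the UI
    out.append(done)

results = []
worker = threading.Thread(target=perform_job, args=(100, results))
worker.start()

# The "UI thread": handle progress messages and press Cancel at step 10.
while True:
    done, total, reply = ui_queue.get()
    cancel = done >= 10
    reply.put(cancel)
    if cancel or done == total:
        break
worker.join()
print(results[0])  # the worker stopped at step 10
```

Ownership of each message moves from worker to UI and back, never being shared, which is exactly the safety property the WinForms pattern below relies on.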

The following code is a general pattern for solving the above problem, based on an example in .NET guru Chris Sells' book "Windows Forms Programming in C#". I'd like to thank the author for his example and insight into this solution.

Platforms
Tested on .NET 1.1/2.0 and Windows NT, 98, 2000, XP, 2003

Type of Sample:
Create a new WinForms Project in VS.NET.

Main Components of the example:
Drag the following components from the toolbar on to the form

ProgressBar opProgress; // a progress bar indicating job progress
Button longOpButton; // button to start/cancel the operation
TextBox resultsBox; // a read-only text box for scrolling through results
Label maxOpsLabel; // a label showing the operations completed so far


and add the following constant for the number of times to repeat the operation

/// <summary>
/// The maximum number of times the job will execute.
/// Change this if you want finer control over the operation.
/// </summary>
const int MaxDigits = 10000;

Enum for the Operation States

/// <summary>
/// The states of the long operation
/// </summary>

enum OpState
{
Pending, // No Long worker operation running or canceling
InProgress, // Long worker operation in progress
Canceled, // Long worker operation canceled in UI but not worker
}


OpState state = OpState.Pending; //initial state

Custom EventArgs to be passed to the ShowProgress Handler
/// <summary>
/// Class to hold custom progress event arguments
/// </summary>

class ShowProgressArgs : EventArgs
{
public string results;
public int totalDigits;
public int digitsSoFar;
//should the operation be cancelled
public bool cancel;

public ShowProgressArgs(string results, int totalDigits, int digitsSoFar)
{
this.results = results;
this.totalDigits = totalDigits;
this.digitsSoFar = digitsSoFar;
}
}

ShowProgress delegate and function used to display progress
//delegate that takes a sender and an instance of the custom arguments object.

delegate void ShowProgressHandler(object sender, ShowProgressArgs e);
/// <summary>
/// ShowProgress makes sure the UI thread handles UI changes (progress updates, etc.).
/// If ShowProgress is called from the UI thread, it updates the controls;
/// if it's called from a worker thread, it uses Invoke to call itself
/// back on the UI thread.
/// </summary>
void ShowProgress(object sender, ShowProgressArgs e)
{
// Make sure we're on the UI thread
if( this.InvokeRequired == false )
{
resultsBox.Text = e.results;
opProgress.Maximum = e.totalDigits;
opProgress.Value = e.digitsSoFar;
this.maxOpsLabel.Text = e.digitsSoFar.ToString();

Application.DoEvents();
// Check for Cancel
e.cancel = (state == OpState.Canceled);

// Check for completion
if( e.cancel || (e.digitsSoFar == e.totalDigits) )
{
state = OpState.Pending;
longOpButton.Text = "Calc";
longOpButton.Enabled = true;
}
}
// Transfer control to the UI thread
else
{
//send message to UI thread synchronously
Invoke(new ShowProgressHandler(ShowProgress), new object[] { sender, e });
}
}

PerformJob delegate and function which is the Long operation
/// <summary>
/// Delegate to call the long operation asynchronously
/// </summary>

delegate void PerformJobDelegate(int digits);
/// <summary>
/// The heart of the long operation; this can be any kind of
/// worker-thread-intensive operation. Here 9 digits at a time are
/// sent to the display until MaxDigits is reached.
/// </summary>

void PerformJob(int digits)
{
StringBuilder calcResult = new StringBuilder("", digits + 2);
object sender = System.Threading.Thread.CurrentThread;
ShowProgressArgs e = new ShowProgressArgs(calcResult.ToString(), digits, 0);
// Show progress
ShowProgress(sender, e);

if( digits > 0 )
{
const string nineDigitsString="123456789-";
for( int i = 0; i < digits; i += 9 )
{
calcResult.Append(nineDigitsString);

// Show progress
e.results = calcResult.ToString();
e.digitsSoFar = i + 9;
ShowProgress(sender, e);
// check for Cancel
if( e.cancel ) break;
}
}
}

The Operation Start/Cancel Button Click Event handler
/// <summary>
/// This technique represents a message-passing model.
/// This model is clear, safe, general-purpose, and scalable.
/// It's clear because it's easy to see that the worker is creating a message,
/// passing it to the UI, and then checking the message for information that may
/// have been added during the UI thread's processing of the message.
/// It's safe because the ownership of the message is never shared,
/// starting with the worker thread, moving to the UI thread, and then
/// returning to the worker thread, with no simultaneous access between
/// the two threads. It's general-purpose because if the worker or UI
/// thread needed to communicate information in addition to a cancel flag,
/// that information can be added to the ShowProgressArgs class.
/// Finally, this technique is scalable because it uses a thread pool,
/// which can handle a large number of long-running operations more
/// efficiently than naively creating a new thread for each one.
/// For long-running operations in your WinForms applications,
/// you should first consider message passing.
/// </summary>

private void PerformOpbtn_Click(object sender, System.EventArgs e)
{
// Calc button does double duty as Cancel button
switch( state )
{
// Start a new Long worker operation
case OpState.Pending:
// Allow canceling
state = OpState.InProgress;
longOpButton.Text = "Cancel";

// Async delegate method
PerformJobDelegate PerformOp = new PerformJobDelegate(this.PerformJob);
//Perform the Long Operation MaxDigits times
PerformOp.BeginInvoke(MaxDigits, null, null);
break;

// Cancel a running Long worker operation
case OpState.InProgress:
state = OpState.Canceled;
longOpButton.Enabled = false;
break;

// Shouldn't be able to press Calc button while it's canceling
case OpState.Canceled:
Debug.Assert(false);
break;
}
}
Closing Notes:
  • Please handle the Form Closing Event
    /// <summary>
    /// Handle the closing event of this LongOpUI form
    /// </summary>

    private void LongOpUI_Closing(object sender, System.ComponentModel.CancelEventArgs e)
    {
    //send a cancel signal
    if (this.state == OpState.InProgress)
    this.longOpButton.PerformClick();
    }

  • Strategy for waiting till the operation completes
    //"EndInvoke does not return until the asynchronous call completes.
    //This is a good technique to use with file or network operations,
    //but because it blocks on EndInvoke, you should not use it from threads
    //that service the user interface. Waiting on a WaitHandle is a common thread
    //synchronization technique. You can obtain a WaitHandle using the AsyncWaitHandle
    //property of the IAsyncResult returned by BeginInvoke. The WaitHandle is signaled
    //when the asynchronous call completes, and you can wait for it by calling its WaitOne."
    //-- Source MSDN
    IAsyncResult aResult = PerformOp.BeginInvoke(MaxDigits, null, null);
    //Wait for the call to complete
    aResult.AsyncWaitHandle.WaitOne();
    //PerformJobDelegate returns void; for a delegate type with a return
    //value, EndInvoke would hand back the result here
    PerformOp.EndInvoke(aResult);
    MessageBox.Show("PerformJob with " + MaxDigits + " digits has completed");
  • Polling for Asynchronous Call Completion
    You can use the IsCompleted property of the IAsyncResult returned by BeginInvoke
    to discover when the asynchronous call completes. You might do this when making
    the asynchronous call from a thread that services the user interface.
    Polling for completion allows the user interface thread to continue processing
    user input.
  • Need for a custom ThreadPool
    The .NET ThreadPool has a default limit of 25 threads per available processor.
    You can change this setting in machine.config, but your app may still run into
    "thread-pool starvation" issues, because the pool services almost all
    async callbacks: timer-queue timers and registered wait operations queue
    their callbacks to the ThreadPool, work items can be queued to it directly,
    and ASP.NET web requests are serviced by it as well.
    So sometimes you may need to write your own custom thread pool to avoid
    these issues with the "free" .NET ThreadPool;
    see a sample at http://www.thecodeproject.com/csharp/SmartThreadPool.asp
    Thanks to the author Ami Bar for that article.
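The waiting and polling strategies above map directly onto Python's concurrent.futures, shown here as a sketch of the idea rather than the .NET API (perform_job is a made-up stand-in operation):

```python
from concurrent.futures import ThreadPoolExecutor
import time

def perform_job(digits):
    """Made-up stand-in for a long-running operation."""
    time.sleep(0.05)
    return digits * 2

with ThreadPoolExecutor(max_workers=1) as pool:
    future = pool.submit(perform_job, 10000)  # roughly BeginInvoke
    while not future.done():                  # roughly polling IsCompleted
        time.sleep(0.01)                      # a UI thread would process input here
    result = future.result()                  # roughly EndInvoke; blocks if not yet done

print(result)
```

As with EndInvoke, calling result() before completion would block, which is why the polling loop belongs on a thread that must stay responsive.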


Friday, September 24, 2004

NUnit - Simple attribute based Unit Testing in .NET

NUnit is an open-source unit-testing framework for all .NET languages.

NUnit has two different ways to run your tests:

  • The console runner, nunit-console.exe, is the fastest to launch, but is not interactive.
  • The gui runner, nunit-gui.exe, is a Windows Forms application that allows you to work selectively with your tests and provides graphical feedback.

Sample:-

Here's how to write a test fixture (AccountTest) for a class (Account). The first test method is TransferFunds.

namespace bank
{
using NUnit.Framework;

[TestFixture]
public class AccountTest
{
[Test]
public void TransferFunds()
{
Account source = new Account();
source.Deposit(200.00F);
Account destination = new Account();
destination.Deposit(150.00F);

source.TransferFunds(destination, 100.00F);
Assert.AreEqual(250.00F, destination.Balance);
Assert.AreEqual(100.00F, source.Balance);
}
}
}
The first thing to notice about this class is that it has a [TestFixture] attribute associated with it – this is the way to indicate that the class contains test code (this attribute can be inherited). The class has to be public and there are no restrictions on its superclass. The class also has to have a default constructor.

The only method in the class – TransferFunds, has a [Test] attribute associated with it – this is an indication that it is a test method. Test methods have to return void and take no parameters. The Assert class defines a collection of methods used to check the post-conditions.

Compile and run this example. Assume that you have compiled your test code into bank.dll. Start the NUnit GUI (the installer will have created a shortcut on your desktop and in the “Program Files” folder). After the GUI starts, select the File->Open menu item, navigate to the location of your bank.dll, and select it in the “Open” dialog box. When bank.dll is loaded you will see a test tree structure in the left panel and a collection of status panels on the right. Click the Run button: the status bar and the TransferFunds node in the test tree turn red -- our test has failed.

There are other useful attributes like [SetUp], [ExpectedException(typeof(InvalidOperationException))], and [Ignore("Ignore a fixture")].
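For comparison, the same fixture in Python's unittest, where naming conventions (a TestCase subclass with test_ methods) play the role of NUnit's attributes; the Account class here is a minimal assumed implementation, since the original only shows the test side:

```python
import unittest

class Account:
    """Minimal stand-in for the bank.Account class assumed above."""
    def __init__(self):
        self.balance = 0.0
    def deposit(self, amount):
        self.balance += amount
    def transfer_funds(self, destination, amount):
        destination.deposit(amount)
        self.balance -= amount

class AccountTest(unittest.TestCase):
    def test_transfer_funds(self):
        source = Account()
        source.deposit(200.00)
        destination = Account()
        destination.deposit(150.00)
        source.transfer_funds(destination, 100.00)
        # same post-conditions the NUnit Assert calls check
        self.assertEqual(250.00, destination.balance)
        self.assertEqual(100.00, source.balance)
```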

More Reading on .NET code testing with NUnit at http://www.nunit.org/getStarted.html


Thanks to the authors for the above material from SourceForge/NUnit.


Thursday, September 23, 2004

CCOW

"CCOW - Clinical Context Object Workgroup - is a vendor independent standard developed by the HL7 organization to allow clinical applications to share information at the point of care. CCOW enables the visual integration of disparate healthcare applications. "

Basically, CCOW is a "context management" software-integration standard. Specifically, it defines a protocol for securely linking applications so that they tune to the same context. CCOW works for both client-server and web-based applications.

This means that when a clinician signs onto one application within a CCOW environment, and selects a patient, that same sign-on is simultaneously executed on all other applications within the same environment, and the same patient is selected in all the applications, saving clinician time and improving efficiency.

BUSINESS BENEFITS?

  • Greater flexibility of choice for health providers when purchasing healthcare applications because CCOW offers widespread interoperability between software from different vendors
  • Rapid, unified access for clinicians to patient data when they need it
  • CCOW's single sign-on management capabilities improve user efficiency (fewer time-consuming sign-ons to applications)
  • Context oriented workflow - clinical users can find and compare patient information they need quickly and easily, supporting better clinical decision-making
  • Leverages existing investment - By CCOW-enabling existing IT resources, health providers can realize the benefits of a single sign-on, patient centric information system without major re-investment in new technologies.

CCOW specifies that a Context Manager component is responsible for maintaining the context. Applications are Context Participants that synchronize by querying the context manager to determine the current context and when they wish to update the context. CCOW also supports Mapping Agents, which map equivalent identifiers when the context is updated so that applications can interoperate without sharing the same identification information for patients or users.
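The Context Manager / Context Participant split described above is essentially an observer pattern. A toy Python sketch of the idea (not the real CCOW protocol or API; all names invented):

```python
class ContextManager:
    """Toy sketch of CCOW-style context management: participants join,
    and all are notified whenever the shared context changes."""
    def __init__(self):
        self.context = {}
        self.participants = []
    def join(self, participant):
        self.participants.append(participant)
    def set_context(self, **items):
        self.context.update(items)
        for p in self.participants:
            p.context_changed(dict(self.context))

class App:
    """A participant application that mirrors the shared context."""
    def __init__(self, name):
        self.name = name
        self.current = {}
    def context_changed(self, context):
        self.current = context

cm = ContextManager()
emr, lab = App("EMR"), App("LabViewer")
cm.join(emr)
cm.join(lab)
# One application selects a user and a patient...
cm.set_context(user="drsmith", patient_id="12345")
# ...and every joined application now shows the same patient.
print(lab.current["patient_id"])
```

A real deployment adds secure joins and Mapping Agents to translate identifiers between applications, which this sketch omits.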

CCOW provides two options for communication between components - a Web (HTTP) mapping, and an ActiveX mapping. This allows interoperation to occur even between applications employing different technologies.

I'd like to thank authors of the HL7 web site and other web sites involved with CCOW, for the above material.

Tuesday, September 21, 2004

WMI and SNMP

Notes on Microsoft Windows Management Instrumentation (WMI) technology and its support for the Simple Network Management Protocol (SNMP).


  • WMI is used to represent management objects in Windows-based management environments.
  • The WMI scripting interface also provides scripting support.

The WMI technology also provides:

  • Access to monitor, command, and control any managed object through a common, unifying set of interfaces, regardless of the underlying instrumentation mechanism. WMI is an access mechanism.
  • A consistent model of Windows 2000 operating system operation, configuration, and status.
  • A COM Application Programming Interface (API) that supplies a single point of access for all management information.
  • Interoperability with other Windows 2000 management services. This approach can simplify the process of creating integrated, well-architected management solutions.
  • A flexible, extensible architecture. Developers can extend the information model to cover new devices, applications, and so on, by writing code modules called WMI providers, described later in this document.
  • Extensions to the Windows Driver Model (WDM) to capture instrumentation data and events from device drivers and kernel-side components.
  • A powerful event architecture. This allows management information changes to be identified, aggregated, compared, and associated with other management information. These changes can also be forwarded to local or remote management applications.
  • A rich query language that enables detailed queries of the information model.
  • A scriptable API which developers can use to create management applications. The scripting API supports several languages, including Microsoft Visual Basic; Visual Basic for Applications (VBA); Visual Basic, Scripting Edition (VBScript); Microsoft JScript development software. Besides VBScript and JScript, developers can use any scripting language implementation that supports Microsoft ActiveX scripting technologies with this API (for example, a Perl scripting engine). Additionally, you can use the Windows Script Host or Microsoft Internet Explorer to write scripts using this interface. Windows Script Host, like Internet Explorer, serves as a controller engine of ActiveX scripting engines. Windows Script Host supports scripts written in VBScript, and JScript

The WMI technology architecture consists of the following:

  • A management infrastructure. This includes the CIM Object Manager, which provides applications with uniform access to management data and a central storage area for management data called the CIM Object Manager repository.
  • WMI Providers. These function as intermediaries between the CIM Object Manager and managed objects. Using the WMI APIs, providers supply the CIM Object Manager with data from managed objects, handle requests on behalf of management applications, and generate event notifications.

WMI ships with built-in providers (or standard providers) that supply data from sources such as the system registry. The built-in providers include:

  • Active Directory Provider: Acts as a gateway to all the information stored in the Active Directory service. Allows information from both WMI and Active Directory to be accessed using a single API.
  • Windows Installer Provider: Allows complete control of Windows Installer and installation of software through WMI. Also supplies information about any application installed with Windows Installer.
  • Performance Counter Provider: Exposes the raw performance counter information used to compute the performance values shown in the System Monitor tool. Any performance counters installed on a system will automatically be visible through this provider. Supported by Windows 2000.
  • Registry Provider: Allows Registry keys to be created, read, and written. WMI events can be generated when specified Registry keys are modified.
  • SNMP Provider (snmpincl.dll, namespace root\snmp): Acts as a gateway to systems and devices that use the Simple Network Management Protocol (SNMP) for management. SNMP MIB object variables can be read and written, and SNMP traps can be automatically mapped to WMI events, giving WMI access to MIB data and traps from SNMP-managed devices.
  • Event Log Provider: Provides access to data and event notifications from the Windows 2000 Event Log.
  • Win32 Provider: Provides information about the operating system, computer system, peripheral devices, file systems and security information.
  • WDM Provider: Supplies low level Windows Driver Model driver information for user input devices, storage devices, network interfaces, and communications ports.
  • View Provider: Allows new aggregated classes to be built up from existing classes. Source classes can be filtered for only the information of interest, information from multiple classes can be combined into a single class and data from multiple machines can be aggregated into a single view.

Simple Network Management Protocol (SNMP) is a network management standard that defines a strategy for managing TCP/IP and, more recently, Internet Packet Exchange (IPX) networks.

SNMP uses a distributed architecture that includes:

  • Multiple managed nodes, each with an SNMP entity called an agent which provides remote access to management instrumentation.
  • At least one SNMP entity referred to as a manager which runs management applications to monitor and control managed elements. Managed elements are devices such as hosts, routers, and so on; they are monitored and controlled by accessing their management information.
  • A management protocol, SNMP, is used to convey management information between the management stations and agents. Management information refers to a collection of managed objects that reside in a virtual information store called a Management Information Base (MIB). A MIB thus contains the information requested by the management system.
  • To communicate host information, management systems and agents use SNMP messages. These messages are sent using the User Datagram Protocol (UDP) and are routed between the management system and host by using the Internet Protocol (IP).

Processing Information Requests

When a management system requests information, the following sequence occurs:
  • A management system sends a request to an agent using the agent's IP or IPX address.
  • The agent forms an SNMP datagram that contains an SNMP message and the community name to which the management system belongs.
  • The SNMP agent receives the datagram and confirms the community name. If the community name is valid, the SNMP agent retrieves the appropriate data. Otherwise, if the community name is invalid, the request is rejected. If the agent has been configured to send an authentication trap, a trap message is sent.
  • The SNMP datagram is returned to the management system with the requested information.
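The request-handling sequence above can be sketched as a small decision function. This is a Python illustration of the logic only (the MIB contents and community names are invented), not real SNMP code:

```python
# Toy agent-side flow: validate the community name, then either serve
# the request or (optionally) emit an authentication trap.
MIB = {"sysName.0": "hub-01", "ifNumber.0": 8}   # invented MIB store
VALID_COMMUNITIES = {"public"}

def handle_get(community, oid, send_auth_traps=True):
    """Return (reply, trap) for an incoming Get request."""
    if community not in VALID_COMMUNITIES:
        trap = "authenticationFailure" if send_auth_traps else None
        return None, trap              # request rejected
    return MIB.get(oid), None          # request served

reply, trap = handle_get("public", "sysName.0")
```

A request with an unknown community name produces no reply, and a trap only if the agent is configured to send one, mirroring the third step above.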

SNMP Messages

The following SNMP message types are used:

  • Get This is a request message. SNMP management systems use Get messages to request information about a MIB entry on an SNMP agent.
  • Get-Next A type of request message that can be used to browse an entire tree of managed objects.
  • GetBulk A type of request that specifies that the agent transfer as much data as possible, within the limits of message size.
  • Set This is used to send and assign an updated MIB value to an agent.
  • Notification (or Trap) This is an unsolicited message that an agent sends to an SNMP management system when it detects that a certain type of event has occurred locally on the managed host. Traps do not require acknowledgements.
  • Inform SNMP Managers can communicate with each other using Inform Requests that require acknowledgements.
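To see why Get-Next alone is enough to browse a whole tree of managed objects, here is a rough Python sketch (the OIDs and values are invented): the agent simply returns the lexicographically next OID in its MIB view until the view is exhausted.

```python
# Toy MIB: OIDs (as tuples) in lexicographic order -> values.
MIB = {
    (1, 3, 6, 1, 2, 1, 1, 1): "sysDescr",
    (1, 3, 6, 1, 2, 1, 1, 5): "sysName",
    (1, 3, 6, 1, 2, 1, 2, 1): "ifNumber",
}

def get_next(oid):
    """Return (next_oid, value), or None at the end of the MIB view."""
    for candidate in sorted(MIB):
        if candidate > oid:
            return candidate, MIB[candidate]
    return None

# A manager walks the tree by feeding each reply's OID back in.
walk, oid = [], ()
while (step := get_next(oid)) is not None:
    oid, value = step
    walk.append(value)
```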

WMI SDK support for SNMP

The SNMP Provider includes the following components:

  • Class, instance, and event providers that integrate SNMP information modeling and processing into WMI. These providers map collections of SNMP object values to property values of CIM class instances.
  • An SNMP information module compiler that compiles native SNMP schema information into the format that CIM uses.

Mapping Device Data to CIM Classes
The SNMP Providers map device data to CIM classes through the following methods:

  • Enumerating SNMP Class Definitions. To enumerate a set of class definitions, applications can call IWbemServices::CreateClassEnum or IWbemServices::CreateClassEnumAsync.
    MIB objects are mapped to SNMP CIM classes using the OBJECT-TYPE macro; events are mapped to classes using the TRAP-TYPE and NOTIFICATION-TYPE macros.
    The OBJECT-TYPE macro is used to describe the basic characteristics of a MIB object. The SNMPv1 TRAP-TYPE and SNMPv2C NOTIFICATION-TYPE macros describe the characteristics of an SNMP event.
  • Instantiating SNMP Class Definitions. To instantiate a class definition, applications can call IWbemServices::GetObject or IWbemServices::GetObjectAsync.
  • Enumerating SNMP Class Instances. The SNMP instance Provider services requests to enumerate instances associated with classes that represent device MIBs.
  • Instantiating SNMP Class Instances. The SNMP instance Provider processes requests to instantiate instances of classes that represent MIB objects.
  • Retrieving SNMP Class Instances. To retrieve a particular instance of an SNMP CIM class, applications can call IWbemServices::GetObject or IWbemServices::GetObjectAsync.

SNMP and the CIM Schema
The schema that SNMP uses to define objects differs from that used in the WMI Common Information Model. The SNMPv1 and SNMPv2 schema is called the Structure of Management Information (SMI); it is packaged as MIB files. To define objects, the MIB files use Abstract Syntax Notation One (ASN.1), a standard language, and macro definitions that are used as templates for describing the objects. These macros supply information about the object, including its name, identifier, syntax, description, access rights, and so on.

This summary and the examples below were gathered from various sites, including MSDN (Microsoft); needless to say, I thank the authors for making this information public.

Sample Code
  • Read from an SNMP device. The following Visual Basic script example performs a Get operation on a device class.
Set objLocator = CreateObject("wbemscripting.swbemlocator")

Set objServices = objLocator.ConnectServer(, "root\snmp\mngd_hub")
objServices.security_.privileges.AddAsString("SeSecurityPrivilege")
Set objSet = objServices.ExecQuery _
("SELECT * FROM SNMP_NET_DEVICE_123 WHERE hdwr_idx>1",, _
wbemFlagReturnWhenComplete)
for each obj in objset
'do whatever
next

  • Write to an SNMP device. The following script example performs a Set operation on a device class.

Set objLocator = CreateObject("wbemscripting.swbemlocator")

Set objServices = objLocator.ConnectServer(, "root\snmp\mngd_hub")
objServices.security_.privileges.AddAsString("SeSecurityPrivilege")
Set obj= objServices.Get("SNMP_NET_DEVICE_123=@")
obj.deviceLocation = "40/5073"
obj.put_

DataSets -- Performance Optimization with Remoting

http://msdn.microsoft.com/msdnmag/issues/04/10/CuttingEdge/default.aspx

Monday, September 20, 2004

What is DICOM and Why DICOM

      DICOM (Digital Imaging and Communications in Medicine) is a standard used mainly to distribute and view medical image files such as X-rays, CT scans, MRIs, and ultrasound images.
      We are all used to the X-ray film sheets we get from a hospital when we go in for a scan. They are cumbersome and difficult to archive; some hospitals employ librarians just to catalog and maintain these films for insurance and medical-standards compliance.
Now, as we move to a digital world, we have software capable of viewing these images online, doctors remotely viewing and annotating them (as reference material), and patients carrying home digital pictures or movies of their scans for insurance purposes, etc. All of this is made possible by the universal DICOM standard.
      Many companies make their own custom additions to the DICOM standard, so two DICOM files from two vendors need not have the same contents even if they describe the same patient and come from the same medical device with the same resolution, settings, etc. This too is part of DICOM's flexibility: private attributes may be added so a vendor can meet custom needs and enhance the customer experience with its software.
      I leave you with lots of links to DICOM standards docs, some freeware to process these DICOM image files, and of course sample DICOM image links; needless to say, thanks to the authors for their public sites.


DICOM introduction and free software


Medical Imaging: Samples

Thursday, September 16, 2004

.NET: Using WMI to get MACAddress of a machine







using System;
using System.Management;

namespace GetMACAddress
{
    class Class1
    {
        [STAThread]
        static void Main(string[] args)
        {
            try
            {
                // Query all IP-enabled network adapters for their addresses.
                ManagementObjectSearcher query = new ManagementObjectSearcher(
                    new ObjectQuery("SELECT MACAddress, IPAddress FROM " +
                        "Win32_NetworkAdapterConfiguration WHERE IPEnabled = TRUE"));
                ManagementObjectCollection queryCollection = query.Get();

                foreach (ManagementObject mo in queryCollection)
                {
                    if (mo["IPAddress"] != null && mo["MACAddress"] != null)
                    {
                        Console.WriteLine("IPAddress  : " + ((String[])mo["IPAddress"])[0]);
                        Console.WriteLine("MACAddress : " + mo["MACAddress"].ToString());
                    }
                }
            }
            catch (Exception ex)
            {
                Console.WriteLine(ex.Source);
                Console.WriteLine(ex.Message);
            }
        }
    }
}

Changing a SQL Server 'server' name after Computer Name has changed


sp_dropserver 'old SQL Server server name'
GO

sp_addserver 'new computer name', 'local'
GO

Stop and Restart SQL Server service

Now Run
SELECT @@SERVERNAME
to verify the changes

Performance Tools

PERFORMANCE MONITOR -- Counters; Understand thresholds
CLRPROFILER -- Allocations; Survivors; Leaking (Another Profiler is from DevPartner)
WINDBG -- Dumps; Hangs, Crashes, Blocks, Memory, etc
VADUMP -- Working set; Memory, etc
NETMON -- Data on Wire; Bandwidth and Latency


Performance: Calling Unmanaged Code

-- PInvoke is a fast way to invoke unmanaged code

-- The CLR performs an expensive CAS (Code Access Security) stack walk on every call into such a method, to ensure that all callers have unmanaged-code access permission. For scenarios that are not security sensitive, disable the security check for better performance.
// Use only when security is not a major concern
[DllImport("kernel32.dll"), SuppressUnmanagedCodeSecurity]
public static extern bool Beep(int frequency, int duration);

* TLBIMP generates interop assemblies.
* You can disable the CAS stack walks by building interop assemblies with the TLBIMP /unsafe switch.
* This forces TLBIMP to generate RCWs that perform link demands rather than full demands.
* Use with caution! It can open the door to luring attacks.

C:\>tlbimp mycomponent.dll /out:UnSafe_MyComponent.dll /unsafe

Securing SQL Server 2000 database and datafiles

Restrict physical access to the SQL Server computer. Always lock the server while not in use.
Make sure all the file and disk shares on the SQL Server computer are read-only. In case you have read-write shares, make sure only the right people have access to them.
Use the NTFS file system as it provides advanced security and recovery features.
Prefer Windows authentication to mixed mode. If mixed mode authentication is inevitable, for backward compatibility reasons, make sure you have complex passwords for sa and all other SQL Server logins. It is recommended to have mixed case passwords with a few numbers and/or special characters, to counter the dictionary based password guessing tools and user identity spoofing by hackers.
Rename the Windows NT/2000 Administrator account on the SQL Server computer to discourage hackers from guessing the administrator password.
In a website environment, keep your databases on a different computer than the one running the web service. In other words, keep your SQL Server off the Internet, for security reasons.
Keep yourself up-to-date with the information on latest service packs and security patches released by Microsoft. Carefully evaluate the service packs and patches before applying them on the production SQL Server. Bookmark this page for the latest in the security area from Microsoft: http://www.microsoft.com/security/
If it is appropriate for your environment, hide the SQL Server service from appearing in the server enumeration box in Query Analyzer, using the /HIDDEN:YES switch of NET CONFIG SERVER command.
Enable login auditing at the Operating System and SQL Server level. Examine the audit for login failure events and look for trends to detect any possible intrusion.
If it fits your budget, use Intrusion Detection Systems (IDS), especially on high-risk online database servers. IDS can constantly analyze the inbound network traffic, look for trends and detect Denial of Service (DoS) attacks and port scans. IDS can be configured to alert the administrators upon detecting a particular trend.
Disable guest user account of Windows. Drop guest user from production databases using sp_dropuser
Do not let your applications query and manipulate your database directly using SELECT/INSERT/UPDATE/DELETE statements. Wrap these commands within stored procedures and let your applications call these stored procedures. This helps centralize business logic within the database, at the same time hides the internal database structure from client applications.
Let your users query views instead of giving them access to the underlying base tables.
Discourage applications from executing dynamic SQL statements. To execute a dynamic SQL statement, users need explicit permissions on the underlying tables. This defeats the purpose of restricting access to base tables using stored procedures and views.
Don't let applications accept SQL commands from users and execute them against the database. This could be dangerous (known as SQL injection), as a skilled user can input commands that can destroy the data or gain unauthorized access to sensitive information.
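To make the SQL injection point concrete, here is a minimal sketch using Python's built-in sqlite3 module (standing in for SQL Server; the table and values are invented): a parameter placeholder keeps a hostile input inert, while string concatenation lets it rewrite the query.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

hostile = "nobody' OR '1'='1"

# Parameterized: the input is treated as a literal value -> no rows leak.
safe = conn.execute("SELECT secret FROM users WHERE name = ?",
                    (hostile,)).fetchall()

# Concatenated: the quote in the input changes the query's meaning,
# so the WHERE clause becomes always-true and the secret leaks.
unsafe = conn.execute("SELECT secret FROM users WHERE name = '"
                      + hostile + "'").fetchall()
```

The same principle applies to SQL Server: calling stored procedures with typed parameters keeps user input out of the query text.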
Take advantage of the fixed server and database roles by assigning users to the appropriate roles. You could also create custom database roles that suit your needs.
Carefully choose the members of the sysadmin role, as the members of the sysadmin role can do anything in the SQL Server. Note that, by default, the Windows NT/2000 local administrators group is a part of the sysadmin fixed server role.
Constantly monitor error logs and event logs for security related alerts and errors.
SQL Server error logs can reveal a great deal of information about your server. So, secure your error logs by using NTFS permissions.
Secure your registry by restricting access to the SQL Server specific registry keys like HKEY_LOCAL_MACHINE\Software\Microsoft\MSSQLServer.
If your databases contain sensitive information, consider encrypting the sensitive pieces (like credit card numbers and Social Security Numbers (SSN)). There are undocumented encryption functions in SQL Server, but I wouldn't recommend those. If you have the right skills available in your organization, develop your own encryption/decryption modules using Crypto API or other encryption libraries.
If you are running SQL Server 7.0, you could use the encryption capabilities of the Multi-Protocol net library for encrypted data exchange between the client and SQL Server. SQL Server 2000 supports encryption over all protocols using Secure Socket Layer (SSL). See SQL Server 7.0 and 2000 Books Online (BOL) for more information on this topic. Please note that, enabling encryption is always a tradeoff between security and performance, because of the additional overhead of encryption and decryption.
Prevent unauthorized access to linked servers by deleting the linked server entries that are no longer needed. Pay special attention to the login mapping between the local and remote servers. Use logins with the bare minimum privileges for configuring linked servers.
DBAs generally tend to run the SQL Server service using a domain administrator account. That is asking for trouble: a malicious SQL Server user could take advantage of those domain admin privileges. Most of the time, a local administrator account is more than enough for the SQL Server service.
DBAs also tend to drop system stored procedures like xp_cmdshell and all the OLE automation stored procedures (sp_OACreate and the likes). Instead of dropping these procedures, deny EXECUTE permission on them to specific users/roles. Dropping these procedures would break some of the SQL Server functionality.
Be prompt in dropping the SQL Server logins of employees leaving the organization. Especially, in the case of a layoff, drop the logins of those poor souls ASAP as they could do anything to your data out of frustration.
When using mixed mode authentication, consider customizing the system stored procedure sp_password, to prevent users from using simple and easy-to-guess passwords.
To setup secure data replication over Internet or Wide Area Networks (WAN), implement Virtual Private Networks (VPN) . Securing the snapshot folder is important too, as the snapshot agent exports data and object scripts from published databases to this folder in the form of text files. Only the replication agents should have access to the snapshot folder.
It is good to have a tool like Lumigent Log Explorer handy, for a closer look at the transaction log to see who is doing what in the database.
Do not save passwords in your .udl files, as the password gets stored in clear text.
If your database code is proprietary, encrypt the definition of stored procedures, triggers, views and user defined functions using the WITH ENCRYPTION clause. dbLockdown is a tool that automates the insertion of the WITH ENCRYPTION clause and handles all the archiving of encrypted database objects so that they can be restored again in a single click. Click here to find out more information about this product.
In database development environments, use a source code control system like Visual Source Safe (VSS) or Rational Clear Case. Control access to source code by creating users in VSS and giving permissions by project. Reserve the 'destroy permanently' permission for VSS administrator only. After project completion, lock your VSS database or leave your developers with just read-only access.
Store the data files generated by DTS or BCP in a secure folder/share and delete these files once you are done.
Install anti-virus software on the SQL Server computer, but exclude your database folders from regular scans. Keep your anti-virus signature files up to date.
SQL Server 2000 allows you to specify a password for backups. If a backup is created with a password, you must provide that password to restore from that backup. This discourages unauthorized access to backup files.
Windows 2000 introduced Encrypted File System (EFS) that allows you to encrypt individual files and folders on an NTFS partition. Use this feature to encrypt your SQL Server database files. You must encrypt the files using the service account of SQL Server. When you want to change the service account of SQL Server, you must decrypt the files, change the service account and encrypt the files again with the new service account.

SET NOCOUNT ON at the beginning of every SQL stored procedure

Use SET NOCOUNT ON at the beginning of your SQL batches, stored procedures and triggers in production environments, as this suppresses messages like '(1 row(s) affected)' after executing INSERT, UPDATE, DELETE and SELECT statements.

This in turn improves the performance of stored procedures by reducing network traffic.

.NET Performance Comparisons and Techniques

Performance Comparison: Data Access Techniques OR MSDN Search: Performance Comparison: Data Access Techniques
DataReader is the best when Forward Read-Only access to data is required.
DataSets are useful when you need the data, schema and maybe updateable options.
The new DataSet in .NET 2.0 also does away with the need for a DataSetSurrogate, which was required in .NET 1.1 because 1.1 DataSets are transmitted as XML across remoting boundaries -- costly for a binary remoting approach.

Performance Comparison: .NET Remoting vs. ASP.NET Web Services
If your application needs interoperability with other platforms or operating systems, you would be better off using ASP.NET Web services, as they are more flexible in that they support SOAP section 5 and Document/Literal. On the other hand, use .NET Remoting when you need the richer object-oriented programming model. See ASP.NET Web Services or .NET Remoting: How to Choose for details. In scenarios where performance is the main requirement with security and process lifecycle management is not a major concern, .NET Remoting TCP/Binary is a viable option; however, keep in mind that you can increase the performance of IIS-hosted implementations by adding a few more machines into the system, which may not be possible when using a .NET Remoting TCP/Binary implementation.

Performance Comparison: Encryption Techniques -- Security Choices
When designing a secure system, the implementation techniques should be chosen based on threat mitigation first and performance second. For instance, basic authentication without SSL could be used for better performance, but no matter how fast it is, it would not be useful in systems that are vulnerable to threats not mitigated by it.

Performance Comparison: Transaction Control
Running a database transaction implemented in a stored procedure offers the best performance because it needs only a single round trip to the database.

Improving ASP.NET Performance
Application_BeginRequest:
This event is used when a client makes a request for any ASP.NET web page/handler. It can be useful for redirecting or validating a page request.
So, the ValidateToken call should be done here. Since the handler(s) will have similar headers, this validation logic should be moved to a common area, called on every postback.
Application_Error:
This event is used to handle all unhandled exceptions for any ASP.NET web page/handler, so all ASP.NET global web errors should be trapped here. Always implement a Global.asax error handler. Monitor application exceptions. Use try/finally on disposable resources. Write code that avoids exceptions.
Application_AuthenticateRequest:
This event occurs when the identity of the current user has been established as valid by the security module (NT/Forms authentication) and is available for custom validation; called on every postback.
Response.Flush and Buffer=true behaviour
Microsoft admits that Response.Flush waits for the client browser to acknowledge the flush before continuing to process the ASP code. On a very fast internet connection you don't notice the problem, but a slower modem takes longer to perform that task, so a flush can be costly. Use Response.Buffer=true, of course, but remove or restrict Response.Flush calls across the site, and see an instant and dramatic speed improvement for modem-connected clients. On a flush, the acknowledgement from the client arrives only after the client has received all the content sent so far.
Do not generate user interfaces within global.asax – No Response.Write()'s
Suppress the internal call to Response.End.
The Server.Transfer, Response.Redirect, and Response.End methods all raise exceptions: each internally calls Response.End, which in turn causes a ThreadAbortException. If you use Response.Redirect, consider using the overload that takes false as the second parameter to suppress the internal call to Response.End.
Inefficient rendering. Interspersing HTML and server code, performing unnecessary initialization code on page postback, and late-bound data binding may all cause significant rendering overhead. This may decrease the perceived and true page performance.
Avoid showing too much exception detail to users. Avoid displaying detailed exception information to users, to help maintain security and to reduce the amount of data that is sent to the client.
The following guidelines relate to the development of individual .aspx and .ascx Web page files.
· Trim your page size. (Reduce or disable ViewState, use a common .css file, reference scripts externally via <script language="jscript" src="scripts\myscript.js">, remove extra white space in tables, shorten long control IDs like dataGrid, etc.)
· Enable buffering, in a page or in web.config.
· Use Page.IsPostBack to minimize redundant processing.
· Partition page content to improve caching efficiency and reduce rendering.
· Ensure pages are batch compiled.
· Ensure debug is set to false, in a page or in web.config.
· Optimize expensive loops. Use For instead of ForEach in performance-critical code paths.
· Consider using Server.Transfer instead of Response.Redirect.
· Use client-side validation to reduce round trips.
Avoid Using Page.DataBind
Calling Page.DataBind invokes the page-level method, which in turn calls the DataBind method of every control on the page that supports data binding. Instead of calling the page-level DataBind, call DataBind on specific controls.
The following line calls the page-level DataBind, which recursively calls DataBind on each control -- avoid it:
DataBind();
The following line calls DataBind on the specific control:
yourServerControl.DataBind();
Minimize Calls to DataBinder.Eval
The DataBinder.Eval method uses reflection to evaluate the arguments that are passed in and to return the results. If you have a table that has 100 rows and 10 columns, you call DataBinder.Eval 1,000 times if you use DataBinder.Eval on each column.
Use explicit casting. Using explicit casting (eg. in templates) offers better performance by avoiding the cost of reflection. eg. ((DataRowView)Container.DataItem)["field1"]
Use the ItemDataBound event. If the record that is being data bound contains many fields, it may be more efficient to use the ItemDataBound event. By using this event, you only perform the type conversion once.eg. protected void Repeater_ItemDataBound(Object sender, RepeaterItemEventArgs e) { ... }
State Management
· Application state. Application state is used for storing application-wide state for all clients. Using application state affects scalability because it causes server affinity.
Application["YourGlobalState"] = somevalue;
Use static properties instead of the Application object to store application state. Use application state to share static, read-only data.
· Session state. Session state is used for storing per-user state on the server. The state information is tracked by using a session cookie or a mangled URL. ASP.NET session state scales across Web servers in a farm. (options:InProc, StateService, SQL Server). Disable if not needed in web.config or in page
· View state. View state is used for storing per-page state information. The state flows with every HTTP POST request and response. Think of serialization costs. Disable if not needed, in web.config or per page.
· Alternatives. Other techniques for state management include client cookies, query strings, and hidden form fields.
PS: Store simple state on the client where possible. Consider serialization costs.
Short Circuit the HTTP Pipeline
The HTTP pipeline sequence is determined by settings in the Machine.config file. Comment out the modules that you do not use. For example, if you do not use Forms authentication, explicitly remove its entry for a particular application in your Web.config using the remove tag.
Disable tracing (trace tag) and debugging (compilation tag) in the Web.config files.(during Deployment)
String Management
Use Response.Write for formatting output. Use StringBuilder for temporary buffers. Use HtmlTextWriter when building custom controls. Avoid strings for concatenation, using StringBuilder
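The builder advice translates to most languages. A rough Python sketch, with str.join standing in for StringBuilder:

```python
def concat_naive(pieces):
    out = ""
    for p in pieces:
        out += p          # may re-copy everything built so far each step
    return out

def concat_builder(pieces):
    # One collecting pass and a single join, like StringBuilder.ToString().
    return "".join(pieces)

pieces = [str(i) for i in range(100)]
assert concat_naive(pieces) == concat_builder(pieces)
```

Both produce the same string; the builder form avoids repeated reallocation as the string grows.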
Data Access
· Use paging for large result sets.
· Use a DataReader for fast and efficient data binding.
· Prevent users from requesting too much data.
· Consider caching data. (Invalidate cache when appropriate)
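The paging bullet above can be sketched as a tiny Python helper for illustration (mirroring a SQL-style "skip N rows, take M" pattern; the row data is invented):

```python
def page(rows, page_number, page_size):
    """Return one page of rows; page_number is 1-based."""
    start = (page_number - 1) * page_size
    return rows[start:start + page_size]

rows = list(range(95))
first = page(rows, 1, 10)    # rows 0..9
last = page(rows, 10, 10)    # partial final page, 5 rows
```

In practice the slicing should happen in the database query itself, so only one page of data crosses the wire.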
HttpModule -- the Filter for all Requests
The HttpModule does not replace the target of a request; rather, it receives notification at various processing points during the lifespan of a request. Since (as with HttpHandlers) we can map an HttpModule to all application request pages, we can use an HttpModule as the foundation for a web application controller. The HttpModule class is accessible from any thread that happens to be serving the current request; since access doesn't require a specific thread, the single class instance doesn't represent a bottleneck.
IHttpHandler.IsReusable
By checking the IsReusable property, an HTTP handler factory can ask a handler whether the same instance can be used to service multiple requests. [PS: ASP.NET uses a pool of handlers to service requests.]
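The IsReusable contract can be sketched like this (a Python stand-in for the handler factory, not ASP.NET code): a handler that declares itself reusable is cached and shared, while a non-reusable one is constructed fresh per request.

```python
class ReusableHandler:
    is_reusable = True
    def process(self, req): return f"ok:{req}"

class OneShotHandler:
    is_reusable = False
    def process(self, req): return f"ok:{req}"

class HandlerFactory:
    """Keeps one instance per reusable handler class."""
    def __init__(self):
        self._cache = {}
    def get(self, cls):
        if cls.is_reusable:
            return self._cache.setdefault(cls, cls())
        return cls()

factory = HandlerFactory()
assert factory.get(ReusableHandler) is factory.get(ReusableHandler)
assert factory.get(OneShotHandler) is not factory.get(OneShotHandler)
```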

Advantages in using stored procedures(Database)

Yukon (SQL Server 2005) embeds a subset of the .NET runtime engine inside the SQL Server database.

So, what's the history behind this move? Stored procedures are the precursors: embedding business logic in the database is a genuine need -- why? See below.

There are many advantages in using stored procedures, including the following:

· They are typically the most efficient way to access the database.

· They can reduce network round trips.

· They are easy to change in deployed applications.

· They make it easier to tune the performance of your data access code.

· They provide a better way to give out permissions rather than to give permissions out for base tables.

· They allow the database schema (and schema changes) to be hidden.

· They help to centralize all the data access code.

· If all database access is done through your stored procedures, you get a controlled set of entry points.

· Auditing is easily solved.

· Disconnected pessimistic locking is easily solved.

· Data-close business rules can be put in stored procedures.

· There is no need for triggers, which is good for debuggability and maintainability.

Calling a .NET object from C++

1. Follow the rules in
http://msdn.microsoft.com/library/default.asp?url=/library/en-us/cpguide/html/cpconcominteropsamplecomclientnetserver.asp

2. For the .NET interface the following attributes are necessary
[ComVisible(true)]
[Guid("e15af71b-9860-36e5-af62-9f405c231daa")]
public interface ILoan
{
}

3. The interface assembly should be strong named and registered in the GAC
[assembly:AssemblyKeyFile(@"..\..\..\sample.snk")]
Use:
gacutil -i NETInt.dll

4. For the .NET class the following attributes are necessary
[ClassInterface(ClassInterfaceType.AutoDual)]
[ComVisible(true)]
[Guid("18fe2e0a-1130-388b-9f82-171909823cc2")]
public class Loan : ILoan
{
}

5. The .NET class assembly should be strong named and registered in the GAC
[assembly:AssemblyKeyFile(@"..\..\..\sample.snk")]
Use:
gacutil -i NETclass.dll

6. The .cpp client should
#import "LoanLib\LoanLib.tlb" raw_interfaces_only, no_namespace

7. Two ways to create the .NET object through a smart pointer
// Method #1: Declaring and instantiating a Loan object
_LoanPtr pILoan( __uuidof(Loan) );
OR
// Method #2: Declaring and instantiating a Loan object
_LoanPtr pILoan = NULL;
HRESULT hr = S_OK;
hr = pILoan.CreateInstance( __uuidof(Loan) );

.NET: Using AD to change a user's password and get the password expiry date

1. Add a reference to ActiveDS.tlb (most AD interfaces are still COM-based)

2. The following code merges samples from several online sites into a complete, tested working solution.

Building Secure ASP.NET Applications: Authentication, Authorization, and Secure Communication

http://www.15seconds.com/issue/020730.htm

http://directoryprogramming.net/forums/thread/1531.aspx

Use RoleManager for Windows Authentication in ASP.NET 2.0

// Set the search string along with LDAP path. The search is on username.

// You shouldn't bind as the user whose password you want to change, hence use impersonatable-known-user

string path =
"LDAP://"+"ldapserver"+"/CN="+"username-to-test"+"," + "CN=Users,DC=domain1,DC=main-domain";
try
{
// Create a 'DirectoryEntry' object to search
DirectoryEntry entry = new DirectoryEntry(path, "impersonatable-known-user", "impersonate-known-password",
AuthenticationTypes.ServerBind);
// OR: DirectoryEntry entry = new DirectoryEntry(path, domain + "\\" + impersonatable-known-user,
//     impersonate-known-password, AuthenticationTypes.Secure);

Object obj = null;
try
{
// Bind to the native AdsObject to force authentication.
obj = entry.NativeObject;
}
catch (Exception ex)
{
throw new Exception("Error authenticating user. " + ex.Message);
}

if (obj == null)
{
// user is not authenticated
return false;
}
// user is authenticated

// Create the Directory search instance.
DirectorySearcher search = new DirectorySearcher(entry);

// Maybe this line is better --> DirectoryEntry result = new DirectoryEntry(path,
//     null, null, AuthenticationTypes.None); // bind using the existing, open connection

// Get the first search result - search is on username.
SearchResult result = search.FindOne();
// If the username has been found in the LDAP server.

if(null != result)
{
// The result obtained will look like the following:
// CN=group1,CN=group2,DC=domain1,DC=main-domain
// Get the value of 'memberof' from the properties collection
//MSDN Sample: 'System.DirectoryServices.SearchResult'
if (result.Properties.Contains("memberof"))
{
.....

ASP.NET programs that use Impersonation may not function properly on a Win 2K SP4 Server - Domain Controller

Service Pack 4 (SP4) on a Windows 2000 domain controller does not grant the IWAM account the SeImpersonatePrivilege right, so programs that use impersonation may not function properly.
Solution:

Click on the following from Control Panel on the Win 2K SP4 Server - Domain Controller

Administrative Tools -> Domain Controller Security Policy -> Security Settings -> Local Policies -> User Rights Assignment

"Impersonate a Client after Authentication"

Click Add (button) -> Browse (button)
In the Select Users or Groups dialog, select the IWAM account name and click Add.
To apply the policy type the following at a CMD.EXE prompt:
secedit /refreshpolicy machine_policy /enforce
Then, at the CMD.EXE prompt, restart IIS by typing iisreset

Running with .net 2.0 and .net 1.1 simultaneously

Make sure you run aspnet_regiis -r from the appropriate .NET Framework directory.

For issues outside ASP.NET, refer to Microsoft's OnlyUseLatestCLR registry setting (0 or 1).

You can activate this switch either by setting a registry key or by setting an environment variable:
Activate/Deactivate the Switch Using a Registry Key
Turn on the switch using the following registry setting:
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\.NETFramework\OnlyUseLatestCLR=dword:00000001
Turn off the switch using the following registry setting:
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\.NETFramework\OnlyUseLatestCLR=dword:00000000
Activate/Deactivate the Switch Using an Environment Variable
Activate the switch using the following variable set to "1", as follows: COMPLUS_OnlyUseLatestCLR=1
Deactivate the switch using the following variable set to "0", as follows: COMPLUS_OnlyUseLatestCLR=0
You can find a sample using the registry to activate/deactivate the switch posted on GotDotNet, at the following URL: http://www.gotdotnet.com/Community/UserSamples/Details.aspx?SampleGuid=4caff66c-df51-40ab-bd88-090d34e77520
Then reboot.

Handling Unhandled Exceptions in .NET

Depending on the type of application you are creating, .NET has three different global exception handlers.
For ASP.NET, look at the System.Web.HttpApplication.Error event, normally handled in your Global.asax file.
For console applications, look at the System.AppDomain.UnhandledException event; use AddHandler in your Sub Main.
For Windows Forms, look at the System.Windows.Forms.Application.ThreadException event; use AddHandler in your Sub Main.
This is sometimes loosely called vectored exception handling (strictly, that term refers to the Win32 AddVectoredExceptionHandler mechanism).

It can be beneficial to combine the above global handlers in your app, as well as wrap your Sub Main in a try catch itself.
There is an article in the June 2004 MSDN Magazine that shows how to implement global exception handling in .NET and explains why and when you would use more than one of the above handlers:
http://msdn.microsoft.com/msdnmag/issues/04/06/NET/default.aspx

Example: In a Windows Forms app, attach a handler to the Application.ThreadException event, plus a Try/Catch in Main itself. The Try/Catch in Main only catches exceptions raised by the MainForm constructor; the Application.ThreadException handler will catch all uncaught exceptions from any form/control event handlers.
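The combined wiring might look like the following C# sketch (MainForm is an assumed form class; the message strings are illustrative only):

```csharp
using System;
using System.Threading;
using System.Windows.Forms;

static class Program
{
    [STAThread]
    static void Main()
    {
        // Catches uncaught exceptions from form/control event handlers
        // on the UI thread.
        Application.ThreadException +=
            new ThreadExceptionEventHandler(OnThreadException);

        // Catches uncaught exceptions on non-UI threads (useful for
        // logging; on .NET 2.0 and later the process still terminates).
        AppDomain.CurrentDomain.UnhandledException +=
            new UnhandledExceptionEventHandler(OnUnhandledException);

        try
        {
            // Only exceptions thrown before the message loop starts
            // (e.g. in the MainForm constructor) reach this catch.
            Application.Run(new MainForm());
        }
        catch (Exception ex)
        {
            MessageBox.Show("Startup failure: " + ex.Message);
        }
    }

    static void OnThreadException(object sender, ThreadExceptionEventArgs e)
    {
        MessageBox.Show("UI thread exception: " + e.Exception.Message);
    }

    static void OnUnhandledException(object sender, UnhandledExceptionEventArgs e)
    {
        // e.ExceptionObject may not be an Exception; log defensively.
        Console.Error.WriteLine("Unhandled: " + e.ExceptionObject);
    }
}
```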