Check .NET Application Performance Using Performance Optimization

Introduction

Performance optimization is highly subjective: it depends on multiple factors such as application architecture, design and the way they are implemented (coding). Before taking any action, you first need to identify the areas that need improvement, and only then make changes based on the requirements. That is why I decided to explain this topic as a series.

  • Part 1: Check existing .NET application performance by using a Web Performance Test.
  • Part 2: Check application performance under load using a Load Test in Visual Studio.
  • Part 3: Performance optimization solutions.

Before starting a performance check for any application, you need to understand some basic requirements. The client's benchmark always depends on the business requirement; this is captured as NFRs (Non-Functional Requirements). There can be scenarios where the client says "application performance is very poor", which by itself is too broad a statement for a developer to act on. As a developer you want figures: what load can the application currently handle, and what load does the client expect it to meet? Based on those measurements you can start work and take appropriate action.

Background

Now the question is how to check the current load capacity of an existing application, which I will explain step by step.

Application performance can be measured based upon the following four factors.

  • Response time: The time between the user sending a request and the system displaying the response, i.e. the difference between the HTTP request being sent and the HTTP response being received at the client end (see the sketch after this list).
  • Throughput: The total number of requests that a server can handle per second, e.g. 1000 transactions handled per second.
  • Resource Utilization: Resource utilization cost is calculated based on server and network resources. Resources consumed during request processing are:
    • CPU
    • Memory
    • Disk I/O
    • Network I/O
  • Workload: How much user load the application can handle on the server. There are two types of user load, simultaneous users and concurrent users.
    • Simultaneous users: Have active connections to the same Web site.
    • Concurrent users: Hit the site at exactly the same moment.
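
If you just want a quick, ad-hoc feel for response time and rough throughput before building a full test project, a few lines of client code are enough. The following is only a minimal sketch and is not part of the Visual Studio tooling described below; the URL and request count are placeholders you would replace with your own:

using System;
using System.Diagnostics;
using System.Net.Http;

class ResponseTimeCheck
{
    static void Main()
    {
        // Hypothetical target URL and request count - replace with your own page.
        const string url = "http://localhost:16260/Account/Login";
        const int requests = 20;

        using (var client = new HttpClient())
        {
            var total = Stopwatch.StartNew();

            for (int i = 0; i < requests; i++)
            {
                var sw = Stopwatch.StartNew();
                var response = client.GetAsync(url).GetAwaiter().GetResult(); // send request, wait for response
                sw.Stop();
                Console.WriteLine("Request {0}: {1} in {2} ms", i + 1, response.StatusCode, sw.ElapsedMilliseconds);
                response.Dispose();
            }

            total.Stop();
            // Very rough throughput: sequential requests from a single client only.
            Console.WriteLine("Approx. requests/sec: {0:F1}", requests / total.Elapsed.TotalSeconds);
        }
    }
}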

To check this practically, I used a Visual Studio Web Performance Test against my application and verified the results, as it is easy for any .NET developer. You can use any other tool such as LoadRunner, JMeter, etc.

Create Performance Test

First, I will explain how to check the performance of a web-based application step by step.

Step 1: Copy the web site URL that needs to be tested; it can be a URL on your local machine, e.g. http://localhost:16260/Account/Login. Note down all the steps and input data that need to be captured during the performance test.

Step 2: Open Visual Studio and create a new project of type Web Performance and Load Test Project as in the following:

Web Performance and Load Test

Step 3: Open the .webtest file and select Add Recording. The same can be done by right-clicking the .webtest file and selecting Add Recording.

Add Recording

It will open Internet Explorer with a recording toolbar on the left-hand side.

Record

Step 4: Click the Record button, paste the required URL into the browser, run the web application and execute the required functionality. It will record all HTTP request and response details, including dynamic data (input fields). After the recording completes, you will see the Web performance test in the Web Performance Test Editor as a list of URLs, where it can be edited.

Web performance

Edit Performance Test

Step 1: Open the .webtest file and you will see the list of requested URLs (called the request tree).

Step 2: Select any URL and check its properties (as shown in the above image). It shows properties related to the HTTP request and response. Notice that Think Time for this request is a number greater than 0; this is how many seconds you take before sending the next request. For example, if you send the first request at 01:01:10 and the second request at 01:01:20, its value will be 10 seconds. In the above image, you can see a request for the login page and, after successful authentication, a redirect to “Admin/Default.aspx”.

Step 3: To change the Think Time value, click Set Request Details; it will open a window for the requests, where you can set the value to 1 second or whatever suits your requirement.

Set Request Details

Here you can set the following values:

  • Reporting names: Reporting names make it easier to identify specific Web requests in the Web Performance Test Results Viewer and when you create Excel reports.
  • Think times: Artificial human delay times between Web requests.
  • Response time goals: The number of seconds you set as a goal for the response time of a Web request. This is very important if you want to check whether your page is processed within the specified time.

Step 4: Select any URL and expand Form Post Parameters. Here you can provide the request parameters you want to send, as well as the request's Expected HTTP Status Code property value. For example, if you expect an exception on the server side for a particular parameter, set the expected response to 500 or any other HTTP status value.

Apart from that, you can also specify Validation and Extraction rules, e.g. verify that the Web application is working correctly by validating that certain text, tags, or attributes exist on the page returned by a Web request. That is out of scope here, as we need to concentrate only on application performance, not functional validation.

The next time the test runs, it will apply these parameters to all web requests.

Run and analyze test results

Step 1: In the Web Performance Test Editor, choose Run Test on the toolbar (as shown in the above image). Here you will see the progress of all the web requests recorded earlier, with the updated parameter settings.

Step 2: In the Test Results Viewer window, green means everything went fine and matched your expected results; red means there is some issue and the result failed your expectations.

Test Results

Step 3: Now change the Response Goal to a more realistic value and see whether your page is able to achieve that target or not. Here we set the Response Goal to 1 second.

change Response Goal

Step 4: Run the performance test again and see the results. Here you will see it failed. To see the error details, select the test case in the Test Results Viewer window, right-click, then select View Test Results Details. Here you can see a detailed description of the error, including where exactly the issue occurred and the reason for it.

View Test Results Details

In this article, you learned how to check the performance of any web page. You can create multiple Web Performance test cases for different modules such as Customer, Admin and Manager roles, because during a load test there will be a different requirement for each type of module; e.g. for an eCommerce site, the Customer load will be higher than the Admin and Manager loads. So the modules mostly used by end users, such as Item Search, Add to Cart and Checkout, should be the main targets for performance work.

In my next article, I will explain how this web performance test behaves under heavy load. There you can also see the server resources consumed by your application.

Check Application Performance Under Load Test Using Visual Studio

Background

Before starting work on performance optimization, you first need to set a performance benchmark for your site. As I mentioned in my previous article, if you need to improve your application's performance then first check the existing performance and then work on optimization to meet that benchmark.

Introduction

Please read my first article, Check .NET Application Performance Using Performance Optimization, first, because this article is an extension of it.

Before starting a load test, you should be aware of the difference between a Load Test and a Performance Test.

Performance testing is performed to ascertain how the components of a system perform in a particular situation, as I explained in my previous article: there, an admin user logged into the site, created some records, and I checked the performance for that single user. A Load test, in contrast, is a container for performance tests; it runs them under a defined load until the system reaches its threshold limit. So in a Load test the same functionality (the Admin user scenario) will be run by multiple concurrent users.

Create Load test step by step and verify the results

We will use the same project created in the last article.

Step 1: Create a Load Test: In Solution Explorer, right-click the existing Web Performance and Load Test project node, choose Add, and then choose Load Test.

Step 2: The Load test wizard starts; provide a Scenario name. A scenario consists of a set of performance tests.

E.g. Scenario Name: Buy item

  • Test 1: Login into site, Search for item.
  • Test 2: Search Item, Add to cart and Place order.

Here you can set Think time (artificial human delay times between Web requests).

 

Step 3: Set the Load Pattern; two types of pattern can be provided.

  1. Constant Load: The same number of users constantly hit your application for the specified time. At the beginning of a load test this is not a reasonable or realistic demand for any site, so use it carefully.
  2. Step Load: A user load that increases with time up to a defined maximum user load. It is a good option for checking the maximum user load your application can handle on a particular server configuration; as the load increases, the server will eventually run out of resources.

Step 4: Set the Test Mix Model for the load test. If your application has multiple work flows, such as Search Items, Add to Cart and Place Order, or Upload Master Data, you can define the load according to those work flows, i.e. how many virtual users will use which functionality more. In general, Search Items is used far more by customers of an e-commerce site than Add to Cart or Upload Master Data.
There are four model types that you can choose from.
  1. Based on the total number of tests: In this case, you select which Web Performance Test will run more often when virtual users start hitting your application. After the load test executes, the number of test runs matches the assigned test distribution (you will see this below in the results section). For example, if I have the following 3 test cases and 1000 test cycles run during the load test (i.e. the same test case runs multiple times), then:
    • 600 Search Item test cases should be executed
    • 300 Add to Cart and Place Order test cases should be executed
    • 100 Upload Master Data test cases should be executed

    This means the most iterations will occur for the Search Item test case.

  2. Based on the number of virtual users: This specifies the percentage of virtual users that will run a particular test case. E.g. if there are 1000 active users on your site and 600 of them are using the Search Item functionality, 60% of the total user load will continually keep using the Search Item module, without keeping track of how many cycles have been completed for the Search Item work flow.
  3. Based on user pace: This specifies how many times a particular test will run per user per hour.
  4. Based on sequential order: Here you specify the sequence in which the test cases need to execute. During the load test the same order is followed, looping repeatedly until the load test completes.

Generally, we use only first two options to get more realistic results.

Step 5: Add the Web Performance Tests that need to run under the load test. Here you can add multiple Web Performance Tests (created earlier, in the previous article); one work flow can be one performance test.
You can also provide the load distribution (the value for the Test Mix Model selected in step 4) across multiple Web Performance test cases, giving a more realistic load distribution based on application usage. In the following image, 71% of the application load is admin and 29% is accountant.
So for an eCommerce site, you could create the following types of Web Performance Test cases and distribute the load accordingly, e.g.:
  • Customer Search Items – 60%
  • Add items to Cart and Place Order – 25%
  • Customer Registration – 5%
  • Upload master data – 10%
Step 6: Specify which types of network connection clients will use to connect to your site.
Distribute the load across different connection types, e.g. LAN 60%, 3G 15%, CDMA 10%, etc. Keep LAN at 100% to test an application hosted on your local machine.
Step 7: Specify which types of browser end users will use, creating a closer approximation of the Web browsers that will be used with your application. A Web browser type is randomly associated with each virtual user, based on the browser mix.
Step 8: Specify the Counter Sets for the resources for which you want to collect data from the server where the load test will run. There are three counter categories: percentages, counts, and averages, e.g. % CPU usage, SQL Server lock counts, and IIS requests per second.
 
Click Add Computer and select the resources for which you want to capture counter sets, e.g. ASP.NET, IIS and SQL in the above figure.
You can also specify a remote server (the web site host machine) and threshold rules set on individual performance counters to monitor system resource usage during the load test. The test keeps comparing the current value of each counter against its threshold value, e.g. a CPU % threshold of 80%.
Step 9: Specify the Run Settings, which determine properties such as the duration of the test, warm-up duration, sampling rate, connection model (Web performance tests only), results storage type, validation level and SQL tracing.
There are two options:
  1. Specify the load test duration:
    1. Specify the Warm-up duration (hh mm ss) using the hour, minute and second spin controls.
    2. Specify the Run duration (hh mm ss) using the hour, minute and second spin controls.
  2. Specify the number of times to run the test, using the Test iterations spin control.

Apart from that, you can also specify the Sampling Rate and Validation level (the validations specified during the web performance test, such as the Response Goal of 1 second).

Step 10: Double-click the .loadtest file and click the Run test button to run the load test.

Click Manage Test Controller; here you can also provide the database connection (database name is LoadTest2010) where you want to store the load test results.

 
NOTE: If you don’t have the LoadTest2010 database on the server, go to the IDE folder of your Visual Studio installation, find the “loadtestresultsrepository.sql” file and run it on the server to create the LoadTest2010 database, e.g. C:\Program Files (x86)\Microsoft Visual Studio 11.0\Common7\IDE\loadtestresultsrepository.sql.

Now all the required settings for the load test are done, so let us run the load test and verify the results step by step.

Run Load Test and analyze the results

Step 1: Click the Run button. After a successful run, different types of result reports are available: Summary, Graphs, Tables and Details. By default the Summary opens; note the following points there:

  • Top five slowest pages: the list of pages that are taking the most time to process.
  • The distribution of Web Performance Test case executions, matching the distribution we specified during load test creation (admin 71% and accountant 29%).
  • The types of errors that occurred; here the Response Goal was set to 1 second for all pages in the admin web performance test, but almost all pages failed to achieve that target. Clicking the Error link gives the exact details.
Step 2: Click Graphs; here you can see multiple graphs at a time. On the left-hand side are the counter set values for all resources consumed; on the right-hand side are the resource consumption details in the form of graphs and Range (Max, Min and Avg) values.
In the Counter Set panel, all resource data is available under the Computer Name node. Some of the counters are highlighted below.

  • .NET CLR resources: Consumption of CLR resources during the load test, such as how many bytes of heap are consumed by your application. On the right-hand side, under the Key Indicator graph, you can see that .NET CLR heap consumption increases during load; from this you can decide whether there is a memory leak or not.
  • IIS (ASP.NET/ASP.NET Applications): Resource consumption at the IIS level, such as Requests/Sec, Sessions Active, Requests Rejected (due to security or any other reason), Requests Queued, Output Cache Hit Ratio (how many pages were served directly from the output cache), Cache Hit Ratio (how many times the application cache was used), etc.
  • Memory: How much memory was consumed during the load test, e.g. % Committed Bytes In Use and Available MBytes (the amount of physical memory, in MB, available to running processes).
  • Network Interface: Network resource consumption, such as Bytes Sent/sec, Bytes Received/sec and Current Bandwidth. If bandwidth consumption is high, try to optimize the request/response size by taking the following actions (see the sketch after this list):
    • Use Ajax calls (lightweight requests).
    • Use JSON serialization instead of XML serialization (lightweight responses).
    • Use client-side caching if the same static data needs to be served multiple times (make sure not to cache secure or unnecessary data in the browser, as it also has limited memory).
  • Process: The processes used by your application and their consumption, such as % Processor Time, Handle Count and Thread Count for sqlservr, devenv (Visual Studio), etc.
  • SQL Server resources: Consumption at the SQL Server level, such as Full Scans/sec, Index Searches/sec, Transactions/sec, Lock Timeouts/sec and Number of Deadlocks/sec. Here you can see what issues occur on the database side under heavy load (e.g. deadlocks, timeouts, table scans).
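
As a rough illustration of the JSON-versus-XML point above, the following sketch serializes the same object both ways and prints the payload sizes. The OrderDto type and its values are made up for the example, and JavaScriptSerializer is just one possible JSON serializer (it needs a reference to System.Web.Extensions):

using System;
using System.IO;
using System.Web.Script.Serialization; // requires a reference to System.Web.Extensions
using System.Xml.Serialization;

public class OrderDto
{
    public int OrderId { get; set; }
    public string CustomerName { get; set; }
}

class PayloadSizeDemo
{
    static void Main()
    {
        var dto = new OrderDto { OrderId = 1001, CustomerName = "Test Customer" };

        // JSON payload
        string json = new JavaScriptSerializer().Serialize(dto);

        // XML payload
        var xmlSerializer = new XmlSerializer(typeof(OrderDto));
        string xml;
        using (var writer = new StringWriter())
        {
            xmlSerializer.Serialize(writer, dto);
            xml = writer.ToString();
        }

        // The JSON form is usually noticeably smaller on the wire.
        Console.WriteLine("JSON length: {0} chars", json.Length);
        Console.WriteLine("XML length:  {0} chars", xml.Length);
    }
}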

To see resource usage details, double-click the counter and its details will appear on the right-hand side in the form of graphs and Range values, as in the above image.

In the graphs, the horizontal axis represents time in 10-second intervals. I ran the load test for only 3 minutes.

  • Key Indicator graph: I selected the following resources from the counter set:
    • User Load: Increases every 10 seconds, starting from 10 and reaching 180 (in 3 minutes).
    • Transactions/sec: Increases with the user load.
    • Bytes in all Heaps: Increases with the user load; if it rises sharply there may be a memory leak, i.e. resources are not being released properly after use.

  • Page Response Time: Select the page you want to examine from the counter tree (Scenario > Web Performance Test > Pages > Page Name) to check its performance across the load test. On the graph you can see how it performs as the load increases; as the load increases, its performance goes down. There are multiple properties you can check for any page, such as Avg Page Time, Pages/Sec, etc.
 
  • Controller and Agent: System resources such as % CPU utilization, memory utilization, network I/O bytes sent and received per second, and .NET CLR threads can be checked here. In this run, CPU % usage exceeded the threshold value of 80%.

If you want to share your application's results with peers, export the test results to an Excel file. To export to Excel format, click the Create Excel Report icon on the Load Test result toolbar. It opens an Excel file with the following window, where you provide the required input.

Note: The Load Test Excel plug-in (as shown in the above image) might not load properly. To correct this, in Microsoft Excel 2010 or later, follow these steps:

  1. In the Office ribbon, choose File.
  2. Choose Options and then choose Add-Ins.
  3. In the drop-down list under Manage, choose COM Add-Ins, and then select Go.
  4. Select the checkbox for Load Test Report Addin.

Once the Excel plug-in has loaded properly, perform the following steps:

  1. Select Create Report.
  2. Select Trend.
  3. Provide a Report Name.
  4. Select the Load Test results that need to be included in the report (if the same Load Test was run multiple times).
  5. Select the resource names for which counter values need to be captured in the Excel report.

It will generate an Excel report that contains a table of contents as the first worksheet, with the remaining counter set values on subsequent worksheets, as in the following:

If you compare these results with the previous article, where all the web pages were working fine (except one, stud_entry.aspx), you can see below that under load (with a maximum user load of only 180 users) the performance of all pages goes down.
Note: If you already write unit test cases for your application, you can also run a Load test against them; there is no need to create a Web Performance Test (UI step recording). Please refer to this link for more details: Creating and Running a Load Test Containing Unit Tests.
Lastly, load test results also depend on your current machine configuration and the number of processes running on it. So whenever you run a load test on your local machine, keep all unnecessary processes closed to get more accurate results.
After running the load test you know which pages need improvement. Next you need to identify which area of code is taking the most time to execute. You can use the Stopwatch class (System.Diagnostics.Stopwatch) to find methods with long execution times, as shown below:

Stopwatch watch = new Stopwatch();
watch.Start();
//Call your code here
watch.Stop();
// you can capture execution time in logs to verify the results later
Console.WriteLine("Measured time: " + watch.Elapsed.TotalMilliseconds + " ms.");

For more details about Stopwatch, you can refer here.
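
If you need to time many methods this way, it can help to wrap the pattern in a small helper so the measurement code isn't repeated everywhere. The following is only a minimal sketch; the class name and the use of Console as the output target are my own choices, not part of the article's project:

using System;
using System.Diagnostics;

public static class ExecutionTimer
{
    // Runs the given action, measures it with Stopwatch and writes the elapsed time.
    public static void Measure(string label, Action action)
    {
        var watch = Stopwatch.StartNew();
        try
        {
            action();
        }
        finally
        {
            watch.Stop();
            // Replace Console with your logging framework as needed.
            Console.WriteLine("{0} took {1} ms", label, watch.Elapsed.TotalMilliseconds);
        }
    }
}

// Usage (hypothetical repository call):
// ExecutionTimer.Measure("GetCustomers", () => repository.GetCustomers());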

By following all these steps, as a developer you know the scope and the areas of code that need work from a performance point of view. There is no need to wait for a performance testing team to get these results; it takes only 15-20 minutes for any developer to check their application's performance.

Let’s summarize both articles step by step in the following diagram. Now you can evaluate your own work better.


In my next article, I will explain the performance optimization checkpoints that need to be taken care of in the following phases.

Rabbit Mq Shovel Example

Introduction

It has been a while since I have written an article, that is not because I am not busy, far from it, I have been extremely busy working on a productivity tool that I think people may like. I think I will certainly use it, which is partly why I am writing it. I am working on this tool with CodeProject’s own Pete O’Hanlon, and it is coming on well. We hope to have something ready for its first showing some time in the new year. Anyway enough of the reasons why I haven’t written anything for a while, and back to the here and now.

At work just before X’mas, my team leader Richard King (co-author of Baboon Converters) and I were looking into providing an MSMQ queue-based system where we would have multiple queues in place and each machine in the chain could receive and send. To keep it simple, let’s forget about the duplex comms and just consider it to be a single direction of message travel across machines. The following diagram illustrates roughly what we wanted.

Essentially what we want is some sort of routing from Machine A to Machine B.

This could all be done using pretty standard MSMQ code, and we could just forward the queue messages programmatically, or we could even use MSMQ over WCF or use the new WCF 4.0 RoutingService. We had reservations about all of these approaches.

  • MSMQ code: Simply too much boilerplate code, sure we could abstract that and end up with something pretty slim, but we wanted to see what we could do without going down this route.
  • MSMQ over WCF: Yeah OK, but lots of config required, and we need to host the service somewhere. Also need to create the MSMQ queues and administer the access rights to these queues to only authorised users.
  • RoutingService: This is quite nice, but it still relies on WCF, so suffers from the same problems as MSMQ over WCF.

We just felt that all of these approaches either involved too much config/setup, or were not quite what we wanted without us writing a load of code. We also felt that this bridge must have been crossed before, so we set about looking at the available messaging solutions out there (and there are loads); we will now discuss a few of these.

Note: This article is quite specific in that it really is all about talking about how to tackle this routing arrangement that we needed to solve, so if you think this is not for you, no worries, we get that. However if you have a requirement like that, you never know this article may talk about something that you might use. The choice is yours.

Available Frameworks

There are literally hundreds of messaging solutions out there when you really start to look. We looked at three in some detail, which we will talk about briefly below. We will not go into loads of detail, but shall rather list the attributes that each of the framework vendors claim set their frameworks apart.

NServiceBus

Website: http://www.nservicebus.com/

Vendor claims:

  • Bus architecture
  • Publish/Subscribe
  • Durable messages
  • Sagas
  • Scalable across servers
  • Transactions
  • Serialization
  • IOC support

MassTransit

Website: http://masstransit-project.com/

Vendor claims:

  • Bus architecture
  • Sagas
  • Exception management
  • Transactions
  • Serialization
  • Headers
  • Consumer lifecycle
  • Built on top of Rabbit Mq
  • IOC support

RabbitMq

Website: http://www.rabbitmq.com/

Vendor claims:

  • Messaging that just works

Picking a Framework

Now when it came to actually picking a framework, we had the following criteria:

  1. Is it easy to use?
  2. How long will it take to get up and running?
  3. Will it fit our requirement exactly?
  4. Is the API intuitive, would we understand it in 6 months time?
  5. Having never used it, would we know how to fix it if something went wrong?

Based on extensive proof of concepts, we actually ended up going with Rabbit Mq (which you may find strange as its only claim is “Messaging that just works”), the reason being that it just seemed to have less smoke and mirrors than NServiceBus and Mass Transit. We are not saying these two frameworks are not good, they are both really good, it was just for our purposes we wanted something dead simple to use, and after we conducted our tests, we felt that Rabbit Mq was it. It just had much simpler configuration (once you got the hang of it) and did not involve so much esoteric code that only the designers of these frameworks truly understand.

Rabbit Mq also had a lot better documentation than the others, at least we felt it did anyway.

For the rest of this article, we will talk about how to configure Rabbit Mq to send messages from one machine to another, which is what our requirements were. If this does not sound that interesting to you, or you can’t see the benefit of this, then this is probably the best place to call it a day. However if you think this sort of arrangement that we were trying to solve may be of use to you, please read on.

Rabbit Mq

Rabbit Mq calls itself a message broker, where the typical setup is to have a single Rabbit Mq broker (which they refer to as Agents) that sits on a certain box, and simply deals with incoming messages and makes sure these are dispatched accordingly. One of the stranger aspects (at least for a .NET developer) is that Rabbit Mq is actually written in Erlang. So you will need to install that (which we will talk about in a minute), but don’t let that put you off, there are loads of Rabbit Mq clients out there, .NET being one of them.

Now you may be thinking if Rabbit Mq is a broker type arrangement, which is typically something like this:

How can that possibly do the routing of message as specified by our initial requirements?

That doesn’t look much like a broker type architecture, where we have a central broker. Luckily, Rabbit Mq comes with a handy plug-in called “Shovel” which the Rabbit Mq documentation describes as follows:

rabbitmq_shovel: “A plug-in for RabbitMQ that shovels messages from a queue on one broker to an exchange on another broker.”

All of a sudden, we have two machines involved, each running a Rabbit Mq broker. Mmm, sounds more like our initial requirement all of a sudden. Groovy.

So with that knowledge in place, let’s carry on with the rest of the article where we will discuss what we need to install/configure to successfully see that our requirements are met.

Installation

The first thing you will need to do to get Rabbit Mq up and running is install the required bits and pieces. Now for our requirements, since we wanted to have a Rabbit Mq broker on each of the machines in the message chain, all of these and subsequent installation instructions apply to all machines in the message chain (so for our requirements, that would be two machines).

You should now have several folders created.

Erlang

Which should look like this:

Rabbit Mq

Which should look like this:

Plug-ins Installation

Now that we have installed Erlang and the Rabbit Mq server, we need to install two plug-ins which are discussed below:

Installing the WebServer Plug-in

From the Rabbit installation folder (C:\Program Files\RabbitMQ Server\rabbitmq_server-2.7.0\sbin typically), run the following command line:

rabbitmq-plugins.bat enable rabbitmq_management

After running this command line, we need to get the Rabbit Mq server to see these additional plug-ins, so we need to start and stop the Rabbit server (where you will need to run the following command lines, where the command windows are opened with Admin rights):

  • rabbitmq-service stop
  • rabbitmq-service remove
  • rabbitmq-service install
  • rabbitmq-service start

You might want to wrap these up into a Batch (.BAT) file as we will need to use this combination again.
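
For example, a minimal batch file (the name restart-rabbit.bat is just a suggestion) could contain exactly those four commands; run it from an elevated command prompt in the sbin folder:

rabbitmq-service stop
rabbitmq-service remove
rabbitmq-service install
rabbitmq-service start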

Installing the Shovel Plug-in

From the Rabbit installation folder (C:\Program Files\RabbitMQ Server\rabbitmq_server-2.7.0\sbin typically), run the following command lines:

  • rabbitmq-plugins.bat enable rabbitmq_shovel
  • rabbitmq-plugins.bat enable rabbitmq_shovel_management

As before, after running these command lines, we need to get the Rabbit Mq server to see these additional plug-ins, so we need to start and stop the Rabbit server (where you will need to run the following command lines, where the command windows are opened with Admin rights):

  • rabbitmq-service stop
  • rabbitmq-service remove
  • rabbitmq-service install
  • rabbitmq-service start

Checking the WebServer

Next, check that the web server is available using the following URL: http://localhost:55672/#/.

The username and password are the defaults:

  1. username = "guest"
  2. password = "guest"


It can be seen that we have a running web server by which we can monitor all of the Rabbit Mq server components, such as:

  • Connections
  • Queues
  • Exchanges
  • Shovels (which will not work yet, as we have not configured it)

So all good so far, let’s now turn our attention to configuring the Shovel plug-in, shall we?

Configuring Shovel

The Shovel Rabbit Mq plug-in does this:

rabbitmq_shovel: “A plug-in for RabbitMQ that shovels messages from a queue on one broker to an exchange on another broker.”

Before we can use Shovel, we need to configure it.

Creating the Environment Variable

To enable Rabbit Mq to pick up a config file, we need to create an environment variable that tells Rabbit Mq where its config should be obtained from. This should be done as follows, where the variable value should be the path and name of the Rabbit Mq config file excluding the file extension.
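
The variable in question is RABBITMQ_CONFIG_FILE. Assuming the config file lives at c:\RabbitConfig\Rabbit.config (the location used later in this article), it could be set from an elevated command prompt like this, with the .config extension left off:

setx RABBITMQ_CONFIG_FILE "c:\RabbitConfig\Rabbit" /M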

Creating the Shovel Config File

The next step is to create a new Rabbit Mq config file which will configure the Shovel plug-in; an example of this config file may look like the one below. It is worth knowing that this is an Erlang-style config file, which is what Rabbit Mq uses.

So going back to what we wanted to achieve:

Based on this image, we could end up with a Rabbit Mq config file called Rabbit.config, stored in c:\RabbitConfig, which looks like this (the "." at the end is important).

For the demo code, Machine A and Machine B were two machines where I work, called "C1801" and "C1799", and the queue which we communicated on was called "Killer".

You will need to change these to suit your own requirements.

%% Erlang-style config declaring a single shovel named killer_push.
[{rabbitmq_shovel,
  [{shovels,
    [{killer_push,
      [{sources,      [{broker,"amqp://C1801"}]},   % read from the broker on Machine A
       {destinations, [{broker, "amqp://C1799"}]},  % publish to the broker on Machine B
       {queue, <<"Killer">>},                       % the queue being shovelled
       {ack_mode, on_confirm},                      % ack only after the destination confirms
       {publish_properties, [{delivery_mode, 2}]},  % mark re-published messages persistent
       {publish_fields, [{exchange, <<"">>},        % default exchange on the destination...
                         {routing_key, <<"Killer">>}]}, % ...routed to the "Killer" queue
       {reconnect_delay, 5}                         % seconds to wait before reconnecting
      ]}
     ]
   }]
}].

As before, we need to get the Rabbit Mq server to see these plug-in changes, so we need to start and stop the Rabbit server (where you will need to run the following command lines, where the command windows are opened with Admin rights):

  • rabbitmq-service stop
  • rabbitmq-service remove
  • rabbitmq-service install
  • rabbitmq-service start

And that is all there is to the configuration, at least for our intended scenario anyway. It does take a little bit of getting used to the Erlang style config files, but that is just how it is. You get used to it.

Here is a running version of this taken from our actual work PCs, where we fully tested this scenario with Rabbit Mq.

As you can see, this shows the shovel called "killer_push", which is the name configured in the Rabbit Mq config file shown above.

Demo Code

We have included a simple VS2010 demo solution that contains two simple projects, a Sender and a Receiver, which are shown below. These are intentionally simple, so you can see the messages received. You will need to change these for your own purposes.

Sender Code

using System;
using RabbitMQ.Client;
using RabbitMQ.Client.Events;

class Send {

    private ConnectionFactory factory = new ConnectionFactory();
    private IConnection connection = null;
    private IModel channel = null;
    private int counter =0;


    public Send()
    {
        // Broker host for the sending side (Machine A in the diagram); change to suit.
        factory.HostName = "C1801";
    }

    private void Setup()
    {
        counter = 0;
        connection = factory.CreateConnection();
        channel = connection.CreateModel();
        channel.ModelShutdown += Channel_ModelShutdown;
        connection.CallbackException += Connection_CallbackException;
        connection.ConnectionShutdown += Connection_ConnectionShutdown;

        bool durable = true;
        channel.QueueDeclare("Killer", durable, false, false, null);
    }

    private void Publish()
    {
        IBasicProperties properties = channel.CreateBasicProperties();
        properties.DeliveryMode = 2;
        properties.CorrelationId = "sachas message";

        try
        {
            while (true)
            {
                string message = string.Format("This is the message {0}, {1}", 
                                        ++counter, DateTime.Now.ToShortTimeString());
                byte[] body = System.Text.Encoding.UTF8.GetBytes(message);
                channel.BasicPublish("", "Killer", properties, body);
                Console.WriteLine(" [x] Sent {0}", message);
                Console.ReadLine(); // press Enter to send the next message
            }
        }
        catch (Exception ex)
        {
            Console.WriteLine("SOMETHING IS WRONG!!!!   " + ex.Message);
        }
        finally
        {
            ReStart();
        }
    }

    private void ReStart()
    {
        CleanUp();
        Setup();
        Publish();
    }

    private void CleanUp()
    {
        // Close before disposing, and close the channel before its connection.
        if (channel != null)
        {
            channel.Close();
            channel.Dispose();
        }
        if (connection != null)
        {
            connection.Close();
            connection.Dispose();
        }
    }

    public static void Main() 
    {
        Send r = new Send();
        r.Setup();
        r.Publish();
        
    }

    private void Connection_ConnectionShutdown(IConnection connection, ShutdownEventArgs reason)
    {
        Console.WriteLine("connection_ConnectionShutdown " + reason.ToString());
        ReStart();
    }

    private void Connection_CallbackException(object sender, CallbackExceptionEventArgs e)
    {
        Console.WriteLine("connection_CallbackException " + e.Exception.StackTrace);
        ReStart();
    }

    private void Channel_ModelShutdown(IModel model, ShutdownEventArgs reason)
    {
        Console.WriteLine("CHANNEL__MODEL_SHUTDOWN " + reason.ToString());
        ReStart();
    }
}

Receiver Code

using RabbitMQ.Client;
using RabbitMQ.Client.Events;
using System.Threading;
using RabbitMQ.Client.Exceptions;
using System;

class Receive {

    private ConnectionFactory factory = new ConnectionFactory();
    
    private IConnection connection = null;
    private IModel channel = null;
    private QueueingBasicConsumer consumer = null;

    public Receive ()
    {
        // Broker host for the receiving side (Machine B in the diagram); change to suit.
        factory.HostName = "C1799";
    }

    private void Setup()
    {
        connection = factory.CreateConnection();
        channel = connection.CreateModel();
        channel.ModelShutdown += Channel_ModelShutdown;
        connection.CallbackException += Connection_CallbackException;
        connection.ConnectionShutdown += Connection_ConnectionShutdown;

        bool isDurable = true;
        bool exclusive = false;
        bool autoDelete = false;
        bool noAck = false;

        channel.QueueDeclare("Killer", isDurable, exclusive, autoDelete, null);

        consumer = new QueueingBasicConsumer(channel);
        channel.BasicConsume("Killer", noAck, consumer);

        System.Console.WriteLine(" [*] Waiting for messages." +
                                 "To exit press CTRL+C");
    }

    private void CleanUp()
    {
        // Close before disposing, and close the channel before its connection.
        if (channel != null)
        {
            channel.Close();
            channel.Dispose();
        }
        if (connection != null)
        {
            connection.Close();
            connection.Dispose();
        }
    }

    private void Listen()
    {
        try
        {
            while (true)
            {

                if (!channel.IsOpen)
                    throw new Exception("Channel is closed");

                BasicDeliverEventArgs ea =
                    (BasicDeliverEventArgs)consumer.Queue.Dequeue();

                byte[] body = ea.Body;
                string s = ea.BasicProperties.CorrelationId;
                string message = System.Text.Encoding.UTF8.GetString(body);
                channel.BasicAck(ea.DeliveryTag, false);
                Console.WriteLine(" [x] Received {0}", message);
            }

        }
        catch (Exception ex)
        {
            Console.WriteLine("SOMETHING IS WRONG!!!!   " + ex.Message);
        }
        finally
        {
            CleanUp();
        }
    }

    public static void Main() 
    {
        Receive r = new Receive();
        r.Setup();
        r.Listen();
    }

    private void ReStart()
    {
        connection.CallbackException -= Connection_CallbackException;
        connection.ConnectionShutdown -= Connection_ConnectionShutdown;

        CleanUp();
        Setup();
        Listen();
    }

    private void Connection_ConnectionShutdown(IConnection connection, ShutdownEventArgs reason)
    {
        Console.WriteLine("connection_ConnectionShutdown " + reason.ToString());
        ReStart();
    }

    private void Connection_CallbackException(object sender, CallbackExceptionEventArgs e)
    {
        Console.WriteLine("connection_CallbackException " + e.Exception.StackTrace);
        ReStart();
    }

    private void Channel_ModelShutdown(IModel model, ShutdownEventArgs reason)
    {
        Console.WriteLine("CHANNEL__MODEL_SHUTDOWN " + reason.ToString());
        ReStart();
    }
}

I think the code is pretty self-explanatory, so I will not go into it too much. A lot of this is pretty much what you get from the Rabbit Mq samples, just refactored slightly to the structure shown above.

That’s It

Anyway, that is all I wanted to say for now. I realise this is not my normal type of article but rather a step-by-step instruction type article (which I rarely do), but it took both Richard and me a while to get this Rabbit setup correct, so we felt it was worth sharing with others. If you like it or find it useful, please take some time to write a comment or leave a vote; both are welcome. Thanks.

LINK: http://www.codeproject.com/Articles/309786/Rabbit-Mq-Shovel-Example

Object Oriented Design Principles

Who is the Audience?

This article is intended for those who have at least a basic idea of Object oriented programming. They know the difference between classes and objects and can talk about the basic pillars of object oriented programming i.e., Encapsulation, Abstraction, Polymorphism and Inheritance.

Introduction

In the object oriented world we only see objects. Objects interact with each other. Classes, Objects, Inheritance, Polymorphism, Abstraction are common vocabulary we hear in our day-to-day careers.

In the modern software world every software developer uses object oriented language of some kind, but the question is, does he really know what object oriented programming means? Does he know that he is working as an object oriented programmer? If the answer is yes, is he really using the power of object oriented programming?

In this article we will go beyond the basic pillars of object oriented programming and talk about object oriented design.

Object Oriented Design

It’s a process of planning a software system where objects will interact with each other to solve specific problems. The saying goes, “Proper Object oriented design makes a developer’s life easy, whereas bad design makes it a disaster.”

How does anyone start?

When anyone starts creating software architecture their intentions are good. They try to use their existing experience to create an elegant and clean design.

Over time, software starts to rot. With every feature request or change, the software design alters its shape, and eventually the simplest changes to the application require a lot of effort and, more importantly, create a higher chance of new bugs.

Who is to Blame

Software solves real life business problems and since business processes keep evolving, software keeps on changing.

Change is an integral part of the software world. Obviously, because clients are paying, they will demand what they expect. So we cannot blame “change” for the degradation of software design; it is our design which is at fault.

One of the biggest causes of damaged software design is the introduction of unplanned dependencies into the system. Every part of the system is dependent on some other part, so changing one part will affect another. If we are able to manage those dependencies, we can easily maintain the software system and its quality too.

Example

Solution – Principles, Design Patterns and Software architecture

  • Software architectures like MVC, 3-Tier and MVP tell us how the overall project is going to be structured.
  • Design patterns allow us to reuse experience, or rather, provide reusable solutions to commonly occurring problems, e.g. an object creation problem, an instance management problem, etc.
  • Principles tell us “do this and you will achieve that”; how you do it is up to you. Everyone defines some principles in their life, like “I never lie” or “I never drink alcohol”. They follow these principles to make life easier, but how they stick to them is up to the individual.

In the same way, object oriented design is filled with many principles which let us manage the problems with software design.

Mr. Robert Martin (commonly known as Uncle Bob) categorized them as

  1. Class Design principles – Also called SOLID
  2. Package Cohesion Principles
  3. Package Coupling principle

In this article we will talk about SOLID principles with practical example.

SOLID

It’s an acronym for five principles introduced by Mr. Robert Martin (commonly known as Uncle Bob): Single responsibility, Open-closed, Liskov substitution, Interface segregation and Dependency inversion. It is said (Wikipedia) that when all five principles are applied together, they make it more likely that a programmer will create a system that is easy to maintain and extend over time. Let’s talk about each principle in detail.

I) S – SRP – Single responsibility Principle

Real world comparison

I work as a team leader for one of the software firms in India. In my spare time I do some writing, newspaper editing and other various projects. Basically, I have multiple responsibilities in my life.

When something bad happens at my work place, like when my boss scolds me for some mistake, I get distracted from my other work. Basically, if one thing goes bad, everything will mess up.

Identify Problem in Programming

Before we talk about this principle, I want you to take a look at the following class.

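The class itself is not reproduced above, so here is a minimal sketch of the kind of class being discussed, assuming (based on the solution shown later) an Employee class that mixes data, database logic and report logic:

// Hypothetical reconstruction of the SRP-violating class discussed here.
public class Employee
{
    public string EmployeeName { get; set; }
    public int EmployeeNo { get; set; }

    public void Insert(Employee e)
    {
        //Database logic written here
    }

    public void GenerateReport(Employee e)
    {
        //Set report formatting here
    }
}
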
  • Every time the insert logic changes, this class will change.
  • Every time the report format changes, this class will change.

What is the issue?

Every time one gets changed there is a chance that the other also gets changed, because both live in the same home and have the same parent. We can’t control everything, so a single change leads to double the testing (or maybe more).

What is SRP?

SRP says “Every software module should have only one reason to change”.

  • Software Module – Class, Function etc.
  • Reason to change – Responsibility

Solutions which will not Violate SRP

Now it’s up to us how we achieve this. One thing we can do is create three different classes

  1. Employee – Contains Properties (Data)
  2. EmployeeDB – Does database operations
  3. EmployeeReport – Does report-related tasks
public class Employee
{
    public string EmployeeName { get; set; }
    public int EmployeeNo { get; set; }
}
public class EmployeeDB
{
    public void Insert(Employee e) 
    {
        //Database Logic written here
    }
 public Employee Select() 
    {
        //Database Logic written here
    }
}
public class EmployeeReport
{
    public void GenerateReport(Employee e)
    {
        //Set report formatting
    }
}

Note: This principle also applies to methods. Every method should have a single responsibility.

Can a single class have multiple methods?

The answer is YES. Now you might ask how it’s possible that

  1. A class will have single responsibility.
  2. A method will have single responsibility.
  3. A class may have more than one method.

Well, the answer to this question is simple: it’s context. Responsibility is related to the context in which we are speaking. When we talk about a class’s responsibility, it is at a somewhat higher level: for instance, the EmployeeDB class is responsible for employee operations related to the database, whereas the EmployeeReport class is responsible for employee operations related to reports.

When it comes to methods, it is at a lower level. For instance, look at the following example:

//Method with multiple responsibilities – violating SRP
public void Insert(Employee e)
{
    string StrConnectionString = "";
    SqlConnection objCon = new SqlConnection(StrConnectionString);
    SqlParameter[] SomeParameters = null; //Create Parameter array from values
    SqlCommand objCommand = new SqlCommand("InsertQuery", objCon);
    objCommand.Parameters.AddRange(SomeParameters);
    objCommand.ExecuteNonQuery();
}

//Method with single responsibility – follow SRP
public void Insert(Employee e)
{
    SqlConnection objCon = GetConnection();
    SqlParameter[] SomeParameters = GetParameters();
    SqlCommand objCommand = GetCommand(objCon, "InsertQuery", SomeParameters);
    objCommand.ExecuteNonQuery();
}

private SqlCommand GetCommand(SqlConnection objCon, string InsertQuery, SqlParameter[] SomeParameters)
{
    SqlCommand objCommand = new SqlCommand(InsertQuery, objCon);
    objCommand.Parameters.AddRange(SomeParameters);
    return objCommand;
}

private SqlParameter[] GetParameters()
{
    //Create Parameter array from values
    return null;
}

private SqlConnection GetConnection()
{
    string StrConnectionString = "";
    return new SqlConnection(StrConnectionString);
}

Easier testing is an advantage in itself, and the code becoming more readable is an additional one: the more readable the code is, the simpler it seems.

II) O – OCP – Open Closed Principle

Real World Comparison

Let’s assume you want to add one more floor between the first and second floors of your two-floor house. Do you think it is possible? Yes it is, but is it feasible? Here are some options:

  • One thing you could have done when building the house in the first place was make it with three floors, keeping the second floor empty, and then utilize that floor whenever you want. I don’t know how feasible that is, but it is one solution.
  • Break up the current second floor and build two new floors, which is not sensible.

Identify Problem in Programming

Let’s say the Select method in the EmployeeDB class is used by two clients/screens. One is made for normal employees, one is made for managers, and the Manager Screen needs a change in the method.

If I make changes to the Select method to satisfy the new requirement, the other UI will also be affected. Plus, making changes to an existing, tested solution may introduce unexpected errors.

What is OCP?

It says, “Software modules should be closed for modification but open for extension.” A seemingly contradictory statement.

Solution which will not violate OCP

1) Use of inheritance

We will derive a new class called EmployeeManagerDB from EmployeeDB and override the Select method as per the new requirement.

public class EmployeeDB
{      
    public virtual Employee Select()
    {
        //Old Select Method
    }
}
public class EmployeeManagerDB : EmployeeDB
{
    public override Employee Select()
    {
        //Select method as per Manager
        //UI requirement
    }
}

Note: The design is considered good object oriented design if this change was anticipated at design time and a provision for extension was already in place (the method was made virtual). Now the UI code will look like:

//Normal Screen
EmployeeDB objEmpDb = new EmployeeDB();
Employee objEmp = objEmpDb.Select();

//Manager Screen
EmployeeDB objEmpDb = new EmployeeManagerDB();
Employee objEmp = objEmpDb.Select();

2) Extension method

If you are using .NET 3.5 or later, there is a second way, called extension methods, which lets us add new methods to existing types without even touching them.
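
As a rough sketch only (the method name SelectForManager is my own choice, not from the article), an extension method on EmployeeDB could look like this:

public static class EmployeeDBExtensions
{
    // Adds a new operation to EmployeeDB without modifying the class itself.
    public static Employee SelectForManager(this EmployeeDB db)
    {
        Employee e = db.Select();
        // Apply the extra manager-specific behaviour here.
        return e;
    }
}

// Usage on the Manager screen:
// EmployeeDB objEmpDb = new EmployeeDB();
// Employee objEmp = objEmpDb.SelectForManager();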

Note: There may be some more ways to achieve the desired result. As I said these are principles not commandments.

III) L – LSP – Liskov substitution principle

What is LSP?

You might be wondering why we are defining it before the examples and problem discussion. Simply put, I thought it would make more sense here.

It says, “Subclasses should be substitutable for their base classes.” Don’t you think this statement is strange? If we can always write BaseClass b = new DerivedClass(), then why would such a principle be needed?

Real World Comparison

A father is a real estate businessman, whereas his son wants to be a cricketer. The son can’t replace his father, in spite of the fact that they belong to the same family hierarchy.

Identify Problem in Programming

Let’s talk about a very common example.

Normally, when we talk about geometric shapes, we treat a rectangle as the base class for a square. Let’s take a look at a code snippet.

public class Rectangle
{
    public int Width { get; set; }
    public int Height { get; set; }
}

public class Square:Rectangle
{
    //codes specific to
    //square will be added
}

One can say,

Rectangle o = new Rectangle();
o.Width = 5;
o.Height = 6;

Perfect, but as per LSP we should be able to replace Rectangle with Square. Let’s try to do so.

Rectangle o = new Square();
o.Width = 5;
o.Height = 6;

What is the problem? A square cannot have a different width and height.

What does that mean? It means we can’t replace the base with the derived class, so we are violating LSP.

Why don’t we make width and height virtual in Rectangle, and override them in Square?

Code snippet

public class Square : Rectangle 
{
    public override int Width
    {
        get{return base.Width;}
        set
        {
            base.Height = value;
            base.Width = value;
        }
    }
    public override int Height
    {
        get{return base.Height;}
        set
        {
            base.Height = value;
            base.Width = value;
        }
    }        
}

We can’t, because doing so violates LSP: we are changing the behavior of the Width and Height properties in the derived class (for a Rectangle, width and height are independent; if setting one always changes the other, it no longer behaves like a Rectangle).

(It would not be a true substitution.)

Solution which will not violate LSP

There should be an abstract class Shape which looks like:

public abstract class Shape
{
    public virtual int Width { get; set; }
    public virtual int Height { get; set; }
}

Now there will be two concrete classes independent of each other, one Rectangle and one Square, both derived from Shape (a sketch of these follows below).
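
The article doesn’t show these two classes, so the following is only a minimal sketch of what they could look like under the Shape abstraction above:

public class Rectangle : Shape
{
    // Inherits the independent Width and Height from Shape unchanged.
}

public class Square : Shape
{
    private int side;

    // For a square, setting either dimension keeps both sides equal,
    // which is a valid behaviour for a Shape (no fixed width/height rule).
    public override int Width
    {
        get { return side; }
        set { side = value; }
    }

    public override int Height
    {
        get { return side; }
        set { side = value; }
    }
}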

Now the developer can say:

Shape o = new Rectangle();
o.Width = 5;
o.Height = 6;

Shape o = new Square();
o.Width = 5; //both height and width become 5
o.Height = 6; //both height and width become 6

Even after overriding in the derived classes, we are not changing the behavior of Width and Height, because when we talk about a Shape there is no fixed rule for width and height: they may be equal, or they may not be.

IV) I – ISP– Interface Segregation principle

Real World Comparison

Let’s say you purchase a new desktop PC. You will find a couple of USB ports, some serial ports, a VGA port etc. If you open the cabinet you will see lots of slots on the motherboard used for connecting various parts with each other, mostly used by hardware engineers at the time of assembly.

Those internal slots are not visible until you open the cabinet. In short, only the required interfaces are made available/visible to you. Imagine a situation where everything was exposed externally; then there would be a greater chance of hardware failure (as if life wasn’t hard enough for computer users).

Let’s say we go to a shop to buy something, for instance a cricket bat.

Now imagine the shopkeeper starts showing you balls and stumps as well. We may get confused and end up buying something we did not need, or even forget why we were there in the first place.

Identify Problem in Programming

Let’s say we want to develop a Report Management System. Now, the very first task is creating a business layer which will be used by three different UIs.

  1. EmployeeUI – Shows reports related to the currently logged-in employee.
  2. ManagerUI – Shows reports related to the manager and the team the manager belongs to.
  3. AdminUI – Shows reports related to individual employees, to teams, and to the company as a whole, such as the profit report.
public interface IReportBAL
{
    void GeneratePFReport();
    void GenerateESICReport();

    void GenerateResourcePerformanceReport();
    void GenerateProjectSchedule();

    void GenerateProfitReport();
}
public class ReportBAL : IReportBAL
{    
    public void GeneratePFReport()
    {/*...............*/}

    public void GenerateESICReport()
    {/*...............*/}

    public void GenerateResourcePerformanceReport()
    {/*...............*/}

    public void GenerateProjectSchedule()
    {/*...............*/}

    public void GenerateProfitReport()
    {/*...............*/}
}
public class EmployeeUI
{
    public void DisplayUI()
    {
        IReportBAL objBal = new ReportBAL();
        objBal.GenerateESICReport();
        objBal.GeneratePFReport();
    }
}
public class ManagerUI
{
    public void DisplayUI()
    {
        IReportBAL objBal = new ReportBAL();
        objBal.GenerateESICReport();
        objBal.GeneratePFReport();
        objBal.GenerateResourcePerformanceReport ();
        objBal.GenerateProjectSchedule ();
    }
}
public class AdminUI
{
    public void DisplayUI()
    {
        IReportBAL objBal = new ReportBAL();
        objBal.GenerateESICReport();
        objBal.GeneratePFReport();
        objBal.GenerateResourcePerformanceReport();
        objBal.GenerateProjectSchedule();
        objBal.GenerateProfitReport();
    }
}

Now in each UI, when the developer types “objBal”, the following IntelliSense is shown:

What is the problem?

The developer working on EmployeeUI gets access to all the other methods as well, which may cause unnecessary confusion.

What is ISP?

It states that “Clients should not be forced to implement interfaces they don’t use.” It can also be stated as “Many client specific interfaces are better than one general purpose interface.” In simple words, if your interface is fat, break it into multiple interfaces.

Update code to follow ISP

public interface IEmployeeReportBAL
{
    void GeneratePFReport();
    void GenerateESICReport();
}
public interface IManagerReportBAL : IEmployeeReportBAL
{
    void GenerateResourcePerformanceReport();
    void GenerateProjectSchedule();
}
public interface IAdminReportBAL : IManagerReportBAL
{
    void GenerateProfitReport();
}
public class ReportBAL : IAdminReportBAL 
{    
    public void GeneratePFReport()
    {/*...............*/}

    public void GenerateESICReport()
    {/*...............*/}

    public void GenerateResourcePerformanceReport()
    {/*...............*/}

    public void GenerateProjectSchedule()
    {/*...............*/}

    public void GenerateProfitReport()
    {/*...............*/}
}
public class EmployeeUI
{
    public void DisplayUI()
    {
        IEmployeeReportBAL objBal = new ReportBAL();
        objBal.GenerateESICReport();
        objBal.GeneratePFReport();
    }
}
public class ManagerUI
{
    public void DisplayUI()
    {
        IManagerReportBAL objBal = new ReportBAL();
        objBal.GenerateESICReport();
        objBal.GeneratePFReport();
        objBal.GenerateResourcePerformanceReport();
        objBal.GenerateProjectSchedule();
    }
}
public class AdminUI
{
    public void DisplayUI()
    {
        IAdminReportBAL objBal = new ReportBAL();
        objBal.GenerateESICReport();
        objBal.GeneratePFReport();
        objBal.GenerateResourcePerformanceReport();
        objBal.GenerateProjectSchedule();
        objBal.GenerateProfitReport();
    }
}

By following ISP we let each client see only what it is required to see.

V) D – DIP – Dependency Inversion principle

Real World Comparison

Let’s talk about our desktop computers again. Different parts such as the RAM, the hard disk and the CD-ROM drive are loosely connected to the motherboard, so if any part stops working in the future it can easily be replaced with a new one. Just imagine a situation where all the parts were tightly coupled to each other, meaning no part could be removed from the motherboard. In that case, if the RAM stopped working we would have to buy a new motherboard, which would be very expensive.

Identify Problem in Programming

Look at the following code.

public class CustomerBAL
{
    public void Insert(Customer c)
    {
        try
        {
            //Insert logic
        }
        catch (Exception e)
        {
            FileLogger f = new FileLogger();
            f.LogError(e);
        }
    }
}

public class FileLogger
{
    public void LogError(Exception e)
    {
        //Log Error in a physical file
    }
}

In the above code CustomerBAL is directly dependent on the FileLogger class, which logs exceptions to a physical file. Now let’s assume that tomorrow management decides to log exceptions to the Event Viewer instead. Now what? Change the existing, working code? Oh no, that might introduce a new error!

What is DIP?

It says, “High level modules should not depend upon low level modules. Rather, both should depend upon abstractions.”

Solution with DIP

public interface ILogger
{
    void LogError(Exception e);
}

public class FileLogger:ILogger
{
    public void LogError(Exception e)
    {
        //Log Error in a physical file
    }
}
public class EventViewerLogger : ILogger
{
    public void LogError(Exception e)
    {
        //Log Error in the Windows Event Viewer
    }
}
public class CustomerBAL
{
    private ILogger _objLogger;
    public CustomerBAL(ILogger objLogger)
    {
        _objLogger = objLogger;
    }

    public void Insert(Customer c)
    {
        try
        {
            //Insert logic
        }
        catch (Exception e)
        {            
            _objLogger.LogError(e);
        }
    }
}

As you can see, the client now depends on the abstraction ILogger, which can be set to an instance of any class that implements it.
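
As a quick usage sketch (this composition code is illustrative and not part of the original article), the caller decides which implementation to inject, and CustomerBAL itself never changes:

// Swap the logger without touching CustomerBAL.
ILogger logger = new EventViewerLogger();   // or: new FileLogger()
CustomerBAL customerBal = new CustomerBAL(logger);
customerBal.Insert(new Customer());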

So now we’ve covered all five principles of SOLID. Thanks Uncle Bob.

Is that the end?

Now the question is: are there more principles beyond the five categorized by Uncle Bob? The answer is yes, but we are not going to describe each of them in detail for now. They are:

  • Program to Interface Not Implementation.
  • Don’t Repeat Yourself.
  • Encapsulate What Varies.
  • Depend on Abstractions, Not Concrete classes.
  • Least Knowledge Principle.
  • Favor Composition over Inheritance.
  • Hollywood Principle.
  • Apply Design Pattern wherever possible.
  • Strive for Loosely Coupled System.
  • Keep it Simple and Sweet / Stupid.

Conclusion

We can’t avoid changes. The only thing we can do is develop and design software in such a way that it is able to handle such changes.

  • SRP should be kept in mind while creating any class, method or any other module (it even applies to SQL stored procedures and functions). It makes code more readable, robust and testable.
  • In my experience we can’t follow DIP every single time; sometimes we have to depend on concrete classes. What we have to do is understand the system, the requirements and the environment properly, and find the areas where DIP should be followed.
  • Following DIP and SRP also opens the door to implementing OCP.
  • Make sure to create specific interfaces so that complexity and confusion are kept away from end developers, and thus ISP does not get violated.
  • While using inheritance, take care not to violate LSP.

I hope all of you enjoyed reading this article. Thank you for your patience.

Original link: http://www.codeproject.com/Articles/567768/Object-Oriented-Design-Principles

IIS Crash – Notes on Using Recursion in C#

Hello everyone.
– I often use recursion for the tree structures I work with regularly: team trees, company trees, and so on.
– In the normal case, using recursion is not a problem at all.
+ Today, however, I hit a situation where IIS kept crashing and I could not tell why.

Log Name: Application
Source: Application Error
Date: 11/17/2015 4:11:35 PM
Event ID: 1000
Task Category: (100)
Level: Error
Keywords: Classic
User: N/A
Computer: BASRV35.ba.vn
Description:
Faulting application name: w3wp.exe, version: 8.5.9600.16384, time stamp: 0x5215df96
Faulting module name: clr.dll, version: 4.0.30319.34209, time stamp: 0x5348a1ef
Exception code: 0xc00000fd
Fault offset: 0x00000000000056a4
Faulting process id: 0x9c30
Faulting application start time: 0x01d12117d7418b09
Faulting application path: c:\windows\system32\inetsrv\w3wp.exe
Faulting module path: C:\Windows\Microsoft.NET\Framework64\v4.0.30319\clr.dll
Report Id: 32cccab2-8d0b-11e5-80c6-002590d58429
Faulting package full name:

+ After some googling I found a post explaining that exception code 0xc00000fd is a StackOverflowException:
(+) http://stackoverflow.com/questions/17189522/iis-crash-on-stack-overflow-unhandled-microsoft-net-4-5-asp-net-mvc-3
Example: code before the fix

public class RecuriveBefore
{
    public void Recursive(int value, int level)
    {
        try
        {
            // Write the call number and call this method again.
            // ... The stack will eventually overflow.
            Console.WriteLine(value);
            Recursive(++value, level + 1);
        }
        catch (StackOverflowException ex)
        {
            // Note: since .NET Framework 2.0 a StackOverflowException thrown by
            // the runtime cannot be caught; the CLR terminates the process.
            // In IIS that means the w3wp.exe worker process crashes, exactly as
            // shown in the event log above.
            Console.WriteLine("Exception: {0}", ex.Message);
        }
    }
}
class Program
{
    static void Main()
    {
        // Begin the infinite recursion.
        RecuriveBefore recursiveBefore = new RecuriveBefore();
        //recursiveBefore.Recursive(0, 0);

        Console.ReadLine();
    }
}

– How to adjust the recursive methods:
+ Add a variable that tracks the recursion depth (how many times the method has called itself).
+ When the maximum depth is exceeded, exit the recursion immediately; otherwise a StackOverflowException occurs, and once that happens the application (the worker process) is gone.
Example: code after the fix

public class RecuriveAfter
{
    private int MaxDepth = 2000;

    public void Recursive(int value, int level)
    {
        try
        {
            if (level < MaxDepth)
            {
                // Write the call number and call this method again.
                Console.WriteLine(value);
                Recursive(++value, level + 1);
            }
            else
            {
                // Stop before the call stack gets too deep.
                Console.WriteLine("Level:" + level);
                Console.WriteLine("Exit Recursive.");
            }
        }
        catch (StackOverflowException ex)
        {
            Console.WriteLine("Exception: {0}", ex.Message);
        }
    }
}
class Program
{
    static void Main()
    {
        RecuriveAfter recursiveAfter = new RecuriveAfter();
        recursiveAfter.Recursive(0, 0);

        Console.ReadLine();
    }
}
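
As a closing note (this alternative is not from the original post): for very deep trees another option is to avoid deep recursion entirely and traverse with an explicit stack, so the depth is limited by heap memory rather than by the thread's call stack. A minimal sketch, assuming a hypothetical Node type with a Children list:

using System;
using System.Collections.Generic;

public class Node
{
    public string Name;
    public List<Node> Children = new List<Node>();
}

public static class TreeWalker
{
    // Depth-first traversal without recursion: pending nodes are kept on a
    // heap-allocated Stack<T>, so even a very deep tree cannot overflow the
    // call stack and take down the w3wp.exe worker process.
    public static void Visit(Node root, Action<Node> action)
    {
        if (root == null) return;

        var pending = new Stack<Node>();
        pending.Push(root);

        while (pending.Count > 0)
        {
            Node current = pending.Pop();
            action(current);

            foreach (Node child in current.Children)
            {
                pending.Push(child);
            }
        }
    }
}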