
Easy way to parse SQL data reader objects

Many times, I have seen people end up writing a lot of code to read from a SQL data reader, often with no fail-safe mechanism to handle a failed parse.

In this article, I will address the above issue and implement a uniform way to handle things, with a fail-safe mechanism, using a TryParse approach. Mainly, I will extend the built-in TryParse mechanism that comes with .NET. int.TryParse and similar methods were introduced in the 2.0 Framework, so the code should be compatible with the 2.0 or higher framework (note that the optional parameters used below require a C# 4.0 or later compiler).

Let's dive directly into usage, since we have just a single function, DbTryParse; I will explain the function at a later stage.
We can use it with a fail-safe mechanism, where the row gets skipped if some values are not parsed properly, or in the normal way, where we define a default value and move ahead with the other rows.

 int id;  
 string val;  
 double dbl;  
 reader["id"].DbTryParse(out id, int.TryParse, 89);  
 reader["val"].DbTryParse(out val, "NA");  
 reader["doubleTest"].DbTryParse(out dbl, double.TryParse);  

I have tried this with integer, string and double; other types can be used in a similar way.
Here, I am reading the integer id column and passing the parsing logic as int.TryParse. If the parse fails, it will set 89 as the default; this is an optional parameter. Check out the double parsing, where we are not setting a default value.

Since there is no built-in TryParse-like function for string, and it is a reference type, there is no need for a handler; it is implemented in a different way, which we will see later.

 reader["val"].DbTryParse(out val, "NA");  

If we want to skip a row on failure of any parse, then we could use:

 if (reader["id"].DbTryParse(out id, int.TryParse) &&  
 reader["val"].DbTryParse(out val) &&  
 reader["doubleTest"].DbTryParse(out dbl, double.TryParse))  
 {  
     Console.WriteLine("Parsing successful");  
 }  
 else  
 {  
     Console.WriteLine("Parsing failed.");  
 }  
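To see the skip-on-failure mode end to end, here is a minimal, self-contained sketch. It simulates reader values with a plain object array (the column order id, val, doubleTest and the sample values are made up for illustration), and it includes a condensed copy of the DbTryParse extensions from this article so the snippet compiles on its own; in real code you would call the extensions on `reader["..."]` values inside a `while (reader.Read())` loop.

```csharp
using System;

// Condensed copy of the article's extensions so this sketch is self-contained.
static class Ext
{
    public delegate bool TryParseHandler<T>(string value, out T result);

    public static bool DbTryParse<T>(this object val, out T parsedValue,
        TryParseHandler<T> handler, T defaultValue = default(T)) where T : struct
    {
        if (handler == null) throw new ArgumentNullException("handler");
        string s = val == DBNull.Value ? null : Convert.ToString(val);
        if (String.IsNullOrEmpty(s)) { parsedValue = defaultValue; return false; }
        return handler(s, out parsedValue);
    }

    public static bool DbTryParse(this object val, out string parsedValue,
        string defaultValue = null)
    {
        parsedValue = val == DBNull.Value ? defaultValue : Convert.ToString(val) ?? defaultValue;
        return parsedValue != defaultValue;
    }
}

class SkipRowDemo
{
    static void Main()
    {
        // Simulated rows: (id, val, doubleTest). Second row has a NULL id.
        object[][] rows =
        {
            new object[] { "42", "admin", "7" },
            new object[] { DBNull.Value, "guest", "7" }
        };

        foreach (var row in rows)
        {
            int id; string val; double dbl;
            // All three parses must succeed, otherwise the row is skipped.
            if (row[0].DbTryParse(out id, int.TryParse) &&
                row[1].DbTryParse(out val) &&
                row[2].DbTryParse(out dbl, double.TryParse))
                Console.WriteLine("Parsed row: {0}, {1}, {2}", id, val, dbl);
            else
                Console.WriteLine("Row skipped.");
        }
    }
}
```

The `&&` chain short-circuits, so as soon as one column fails to parse, the remaining columns are not even attempted for that row.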

That is all we need for parsing. The single DbTryParse takes care of every possible parse.

Let's look into the implementation part. All the TryParse methods in .NET accept a string as the first parameter and then an out parameter for the parsed value. To match that shape, a TryParseHandler<T> delegate wrapper is created. The generic parse function is constrained to struct, which takes care of the value types, and the same function is overloaded to handle the string type.

Here is the code:

   using System;  
  
   public static class CommonExtensionMethods  
   {  
     /// <summary>  
     /// Try parse handler  
     /// </summary>  
     /// <typeparam name="T">The result type</typeparam>  
     /// <param name="value">The string literal.</param>  
     /// <param name="result">The result of typecasting.</param>  
     /// <returns>True, if typecasting is successful, else false.</returns>  
     public delegate bool TryParseHandler<T>(string value, out T result);  
     /// <summary>  
     /// Generic try parse.  
     /// </summary>  
     /// <typeparam name="T">Type of the value</typeparam>  
     /// <param name="value">The string literal.</param>  
     /// <param name="parsedValue">The parsed value.</param>  
     /// <param name="handler">The type casting handler.</param>  
     /// <param name="defaultValue">The default value. This value will be set in case of failed parsing.</param>  
     /// <returns>True if parsing succeeded; otherwise, false.</returns>  
     /// <exception cref="System.ArgumentNullException">handler</exception>  
     public static bool TryParse<T>(this string value, out T parsedValue, TryParseHandler<T> handler,  
       T defaultValue = default(T))  
       where T : struct  
     {  
       if (handler == null)  
       {  
         throw new ArgumentNullException("handler");  
       }  
       if (String.IsNullOrEmpty(value))  
       {  
         parsedValue = defaultValue;  
         return false;  
       }  
       return handler(value, out parsedValue);  
     }  
     /// <summary>  
     /// Generic try parse for databases object.  
     /// </summary>  
     /// <typeparam name="T">The type of object for parsing value</typeparam>  
     /// <param name="val">The value.</param>  
     /// <param name="parsedValue">The parsed value.</param>  
     /// <param name="handler">The parsing handler.</param>  
     /// <param name="defaultValue">The default value. This value will be set in case of failed parsing.</param>  
     /// <returns>  
     /// True if parsing succeeded; otherwise, false.  
     /// </returns>  
     public static bool DbTryParse<T>(this object val, out T parsedValue, TryParseHandler<T> handler  
       , T defaultValue = default(T))  
       where T : struct  
     {  
       if (val == DBNull.Value)  
       {  
         parsedValue = defaultValue;  
         return false;  
       }  
       return Convert.ToString(val).TryParse(out parsedValue, handler, defaultValue);  
     }  
     /// <summary>  
     /// Databases object parsing to string.  
     /// </summary>  
     /// <param name="val">The value.</param>  
     /// <param name="parsedValue">The parsed value.</param>  
     /// <param name="defaultValue">The default value. This value will be set in case of failed parsing.</param>  
     /// <returns>True if parsing succeeded; otherwise, false.</returns>  
     public static bool DbTryParse(this object val, out string parsedValue  
       , string defaultValue = null)  
     {  
       if (val == DBNull.Value)  
       {  
         parsedValue = defaultValue;  
         return false;  
       }  
       parsedValue = Convert.ToString(val) ?? defaultValue;  
       return parsedValue != defaultValue;  
     }  
     /// <summary>  
     /// Databases object parsing to string.  
     /// </summary>  
     /// <param name="val">The value.</param>  
     /// <param name="defaultValue">The default value. This value will be set in case of failed parsing.</param>  
     /// <returns>String representation of given object</returns>  
     public static string DbParse(this object val, string defaultValue = null)  
     {  
       return val == DBNull.Value ? defaultValue : Convert.ToString(val) ?? defaultValue;  
     }  
   }  
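The TryParseHandler<T> delegate matches any method shaped like the BCL TryParse pattern, so custom parsers plug in alongside int.TryParse. Here is a small sketch demonstrating that; the delegate is the one from the article, while the yesNo parser for a hypothetical Y/N flag column is a made-up example (written as a C# 2.0 anonymous method, which supports out parameters).

```csharp
using System;

class HandlerDemo
{
    // Same delegate shape as in CommonExtensionMethods above.
    public delegate bool TryParseHandler<T>(string value, out T result);

    static void Main()
    {
        // Built-in parser bound via method group conversion.
        TryParseHandler<int> parseInt = int.TryParse;
        int n;
        Console.WriteLine(parseInt("123", out n)); // True, n == 123

        // Custom parser for a hypothetical Y/N flag column.
        TryParseHandler<bool> yesNo = delegate(string s, out bool r)
        {
            r = s == "Y";
            return s == "Y" || s == "N";
        };
        bool flag;
        Console.WriteLine(yesNo("Y", out flag)); // True, flag == true
        Console.WriteLine(yesNo("?", out flag)); // False
    }
}
```

Anything assignable to TryParseHandler<T>, whether a BCL method group or your own lambda/anonymous method, can be passed as the handler argument of DbTryParse.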

Source: http://www.mindfiresolutions.com/Database-Reader-Object-Parsing-in-NET-2673.php
