
Client-side validation for a data-driven view engine

The prerequisite for this is an already designed database-driven view engine. A good guide for implementing one is Data Driven Custom View Engine in ASP.NET MVC (http://www.dotnetcurry.com/aspnet-mvc/946/data-driven-custom-view-engine-aspnet-mvc).

Conceptually, a DB-driven view engine needs a dynamic form/screen table along with an associated attribute set for its controls. The attribute set of a control can carry constraints like Required, MaxLength, RegEx etc., similar to the built-in DataAnnotation attributes, except that they come from the database.

jQuery unobtrusive validation works entirely through HTML5 data-* attributes. So, if we can read the rules of a control (required, max length etc.) and emit them as HTML attributes from the view engine, we are done.
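For illustration only (hand-written here, not produced by the code below), a required text field with a maximum length would need markup along these lines, using the standard data-val-* attribute names that jQuery unobtrusive validation reads; the field name is made up:

 <input type="text" name="CustomerName"
        data-val="true"
        data-val-required="Customer Name cannot be empty."
        data-val-maxlength="Customer Name allows a maximum of 50 characters."
        data-val-maxlength-max="50" />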

While designing the DB-driven view engine, there will be a place where we loop through the available controls, identify each one's type and write it out as an HTML element. A common attribute set, including the validation rules, can be applied to all controls at that point.

The following method gets us the validation attributes as key/value pairs.

 /// <summary>  
 /// Gets the unobtrusive validation attributes.  
 /// </summary>  
 /// <param name="control">The field for dynamic controls.</param>  
 /// <returns>Validation attributes wrapped in Dictionary.</returns>  
 private IDictionary<string, object> GetUnobtrusiveValidationAttributes(AttributeSet control)  
 {  
      IDictionary<string, object> validationAttribs = new Dictionary<string, object>();  
      ModelMetadata metaData = ModelMetadata.FromStringExpression(GetFieldName(control), Helper.ViewData);  
      var clientValidator = new AttributeSetModelValidatorProvider(metaData, ControllerContext,  
           new MaxLengthAttribute(), // Dummy attribute passed  
           control);  
      UnobtrusiveValidationAttributesGenerator.GetValidationAttributes(clientValidator.GetClientValidationRules(), validationAttribs);  
      return validationAttribs;  
 }  

AttributeSetModelValidatorProvider is a custom class designed to read the validation configuration. It inherits from DataAnnotationsModelValidator, whose constructor requires certain parameters, which is why a dummy attribute is passed during initialization. A DataAnnotationsModelValidator normally produces a single validation rule, but in our case it has to produce multiple rules.


UnobtrusiveValidationAttributesGenerator.GetValidationAttributes then transforms those client validation rules into a dictionary of data-* attributes. That dictionary can be applied to the MVC controls when they are rendered.
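As a minimal sketch of that last step (assuming, as in the snippet above, that Helper is the view's HtmlHelper and GetFieldName resolves the control's field name; control.DefaultValue and the htmlOutput buffer are hypothetical names used only for this example), the dictionary can be passed straight into the standard input helpers:

 // Hypothetical rendering step inside the view engine's control loop.
 IDictionary<string, object> validationAttribs = GetUnobtrusiveValidationAttributes(control);
 // HtmlHelper.TextBox merges the data-val-* entries into the rendered <input> element.
 MvcHtmlString textBox = Helper.TextBox(GetFieldName(control), control.DefaultValue, validationAttribs);
 htmlOutput.Append(textBox.ToHtmlString());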

The implementation of the custom class:

 using System;  
 using System.Collections.Generic;  
 using System.ComponentModel.DataAnnotations;  
 using System.Web.Mvc;  
 /// <summary>  
 /// Data annotation model provider for dynamic controls  
 /// </summary>  
 public class AttributeSetModelValidatorProvider  
 : DataAnnotationsModelValidator  
 {  
   /// <summary>  
   /// The dynamic control for generating validation rules.  
   /// </summary>  
   private readonly AttributeSet Control;  
   /// <summary>  
   /// Initializes a new instance of the <see cref="AttributeSetModelValidatorProvider"/> class.  
   /// </summary>  
   /// <param name="metadata">The metadata.</param>  
   /// <param name="context">The context.</param>  
   /// <param name="attribute">The attribute.</param>  
   /// <param name="control">The dynamic control.</param>  
   public AttributeSetModelValidatorProvider(ModelMetadata metadata, ControllerContext context,  
     ValidationAttribute attribute, AttributeSet control)  
     : base(metadata, context, attribute)  
   {  
     Control = control;  
   }  
   /// <summary>  
   /// Retrieves a collection of client validation rules.  
   /// </summary>  
   /// <returns>  
   /// A collection of client validation rules.  
   /// </returns>  
   public override IEnumerable<ModelClientValidationRule> GetClientValidationRules()  
   {  
     var validationRules = new List<ModelClientValidationRule>();  
     #region " Different client side validation rule for dynamic controls "  
     if (Control.IsFieldValueRequired)  
     {  
       validationRules.Add(new ModelClientValidationRequiredRule(  
         String.Format("{0} cannot be empty.", Control.DisplayLabel)));  
     }  
     if (!String.IsNullOrEmpty(Control.ValidationRegEx))  
     {  
       validationRules.Add(new ModelClientValidationRegexRule(String.Format(  
         "Not a valid {0}", Control.DisplayLabel), Control.ValidationRegEx));  
     }  
     if (Control.MaxLength.HasValue && Control.MaxLength.Value > 0)  
     {  
       validationRules.Add(new ModelClientValidationMaxLengthRule(String.Format(  
         "{0} is having {1} allowed characters.", Control.DisplayLabel, Control.MaxLength), Control.MaxLength.Value));  
     }  
     if (Control.FieldName.ToLower().Contains("email"))  
     {  
       validationRules.Add(new ModelClientValidationRule()  
       {  
         ValidationType = "email",  
         ErrorMessage = "E Mail id is not valid."  
       });  
     }  
     ModelClientValidationRule dataType;  
     if (TryGettingTypeValidationRule(Control, out dataType))  
     {  
       validationRules.Add(dataType);  
     }  
     #endregion " Different client side validation rule for dynamic controls "  
     return validationRules;  
   }  
   /// <summary>  
   /// Try getting validation rule for dynamic control.  
   /// </summary>  
   /// <param name="dataField">The data field for dynamic control.</param>  
   /// <param name="dataTypeValidationRule">The data type validation rule.</param>  
   /// <returns>True, if Validation rule exist for dynamic control</returns>  
   private static bool TryGettingTypeValidationRule(AttributeSet dataField,  
     out ModelClientValidationRule dataTypeValidationRule)  
   {  
     dataTypeValidationRule = null;  
      var isSuccess = false;  
     // Helper for creation object for validation rule  
     Func<string, string, ModelClientValidationRule> getValidationRule = (validationType, messageFormat) =>  
     {  
       return new ModelClientValidationRule  
       {  
         ValidationType = validationType,  
         ErrorMessage = String.Format(messageFormat, dataField.DisplayLabel)  
       };  
     };  
     // Rules based on data annotation and jQuery Unobtrusive validation  
     switch (dataField.FieldType.DisplayType)  
     {  
       case DbFieldType.ShortTime:  
       case DbFieldType.DateTime:  
       case DbFieldType.ShortDate:  
         dataTypeValidationRule = getValidationRule("date", "The field {0} must be a date.");  
          isSuccess = true;  
         break;  
       case DbFieldType.Integer:  
       case DbFieldType.Decimal:  
       case DbFieldType.Money:  
       case DbFieldType.Percentage:  
         dataTypeValidationRule = getValidationRule("number", "The field {0} must be a number.");  
          isSuccess = true;  
         break;  
     }  
     // Avoid type check for masked text boxes  
     if (!String.IsNullOrEmpty(dataField.MaskedTextFormat))  
     {  
        isSuccess = false;  
     }  
      return isSuccess;  
   }  
 }  


The implementation is really simple: override the GetClientValidationRules function to return a list of ModelClientValidationRule. The list can contain multiple rules such as ModelClientValidationRegexRule, ModelClientValidationMaxLengthRule, etc., which are built-in classes from MVC.
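One practical reminder beyond the view engine itself: the emitted data-val-* attributes only take effect when the page loads jQuery validation together with the unobtrusive adapter and the dynamic controls are rendered inside a form. With the stock MVC script bundles that typically looks like this (the bundle name is assumed from the default project template):

 @using (Html.BeginForm())
 {
     @* ... dynamically rendered controls carrying the data-val-* attributes ... *@
 }
 @* jquery.validate.js + jquery.validate.unobtrusive.js *@
 @Scripts.Render("~/bundles/jqueryval")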
