TypeScript introduction

In modern web application development we are diving more and more into client-side scripts. AJAX calls are slowly replacing regular postbacks, completely or at least mostly, for a richer user experience. Sometimes we deal with returned HTML and sometimes we just interact with the model, but over time we end up writing a lot of JavaScript on pages or in dedicated JavaScript files.

Soon after starting a project, we end up with a lot of scripts in a short span of time. Writing those scripts is itself a problematic process because we do not get any type safety; only at execution time do we find out that we mistyped a name or used a variable of one type where another was expected. Mostly we understand how the code relates to pages only through comments, unless we keep the JavaScript in the respective pages or name the JS files after the pages they belong to. I know we can use OOP concepts or JS frameworks like Backbone, Knockout, etc. to manage things through an MVC or MVVM approach. Those are really good ways to handle JavaScript, but we still do not get any type safety because it is JavaScript.

I am a really great fan of OOP concepts but have never tried them in JavaScript. Maybe I do not understand JavaScript well enough, or I am too lazy to write the extra script to manage it. I was looking for a solution for easy and happy scripting, and that is when I came to know about TypeScript.

What is TypeScript?

TypeScript is an open source scripting language and a super-set of JavaScript with type safety. That means whatever JavaScript code we have written till now is already valid TypeScript. Oh! What is the point then, and how does that type safety work? JavaScript is a subset of TypeScript, so all existing JavaScript code works without doing anything extra, but to take advantage of TypeScript we can follow its guidelines.

What are the guidelines of TypeScript and how do we code in it?

It is somewhat like JScript .NET and can be related to C#. It is pretty simple to pick up TypeScript if we have worked with any OOP language. It has classes, interfaces, modules (namespaces), generics, type declarations and, most lovely of all, compile-time checks.
To get started, we can look into http://www.typescriptlang.org/Playground
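
As a minimal sketch (with hypothetical names), this is roughly how classes, interfaces, generics and type declarations look along with the compile-time checks:

interface IPerson {
    firstName: string;
    lastName: string;
}

class Person implements IPerson {
    constructor(public firstName: string, public lastName: string) { }

    getFullName(): string {
        return this.firstName + " " + this.lastName;
    }
}

// Generic function: the compiler infers and enforces the type argument.
function firstOrDefault<T>(items: T[]): T {
    return items[0];
}

var person: Person = new Person("John", "Doe");
var fullName: string = person.getFullName();   // fine, getFullName() returns string
// var age: number = person.getFullName();     // compile-time error: string is not assignable to number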

How does it work?

The extension of a TypeScript file is .ts. TypeScript comes with the tsc compiler, which can be downloaded from http://www.typescriptlang.org/#Download. The compiler compiles the .ts files into JavaScript files so that they remain compatible with any browser, doing all the hard work for us.
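For example, assuming a hypothetical file named greeter.ts, running "tsc greeter.ts" emits a plain greeter.js that any browser can run:

// greeter.ts (hypothetical file); compile with: tsc greeter.ts
class Greeter {
    constructor(private message: string) { }

    greet(): string {
        return "Hello, " + this.message;
    }
}

var greeter = new Greeter("TypeScript");
alert(greeter.greet());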

Helpful tools for TypeScript

To take advantage of various JavaScript libraries, there is a huge list of type definitions which can be found at https://github.com/borisyankov/DefinitelyTyped.
By importing those we get type-safety checks. For example, jQuery has a lot of functionality with various types declared in its definition; by importing the definition we get IntelliSense along with type checks.
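As a sketch, assuming the jQuery definition file from DefinitelyTyped has been copied into the project at a hypothetical typings/jquery/jquery.d.ts path, typed usage looks like this:

/// <reference path="typings/jquery/jquery.d.ts" />

$(document).ready(() => {
    var userName = $("#name").val();
    // The jQuery methods and overloads are known to the compiler, so a typo
    // such as .txt() instead of .text() becomes a compile-time error,
    // and the editor can offer IntelliSense for the jQuery API.
    $("#greeting").text("Hello, " + userName);
});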
Web Essentials is a Visual Studio extension which generates the JavaScript file as we write the TypeScript, similar to what you might have seen on the Playground link.

TypeLITE is a tool that generates TypeScript interfaces from server-side classes or models.
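
As a purely hypothetical illustration (the exact output depends on the TypeLITE configuration and the source model), a server-side Customer class might come out as interfaces along these lines:

// Hypothetical generated output for a server-side Customer model.
interface Customer {
    Id: number;
    Name: string;
    Email: string;
    Orders: Order[];
}

interface Order {
    Id: number;
    Total: number;
}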
