
Extending the WhenAny/WhenAll-like feature to get each task as soon as it completes

While working with a list of tasks, we only get two built-in options to observe completion: WhenAny and WhenAll. WhenAny can be used to know when any one of the given tasks completes first, whereas WhenAll notifies when all of the tasks have completed. There is no built-in option to be notified about each task as soon as it completes.

I was going through the book Pro Asynchronous Programming with .NET, which shows a better option to deal with task-completion notifications efficiently.

To return each task as it completes, TaskCompletionSource will be used. Through TaskCompletionSource, a Task can be created and its outcome controlled with the provided methods (SetResult, SetException, SetCanceled). This is also helpful for wrapping legacy code up into tasks.
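As a quick illustration of that wrapping idea (a minimal sketch of my own, not from the book; ComputeWithCallback is a hypothetical callback-based legacy API), TaskCompletionSource turns a callback into an awaitable Task:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

public static class LegacyWrapper
{
    // Hypothetical legacy API that reports its result through a callback.
    public static void ComputeWithCallback(int input, Action<int> onDone)
    {
        ThreadPool.QueueUserWorkItem(_ => onDone(input * 2));
    }

    // Wrap the callback-based API into a Task using TaskCompletionSource.
    public static Task<int> ComputeAsync(int input)
    {
        var tcs = new TaskCompletionSource<int>();
        ComputeWithCallback(input, result => tcs.SetResult(result));
        return tcs.Task;
    }
}

public static class Program
{
    public static void Main()
    {
        // Blocking on Result here just for the demo; normally this would be awaited.
        Console.WriteLine(LegacyWrapper.ComputeAsync(21).Result); // prints 42
    }
}
```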

Let's look directly into the core code that returns tasks in completion order.

     /// <summary>
     /// Orders tasks by completion.
     /// </summary>
     /// <typeparam name="T">Result type of the tasks.</typeparam>
     /// <param name="tasks">The tasks.</param>
     /// <returns>Tasks that complete in the order the source tasks finish.</returns>
     /// <exception cref="System.ArgumentNullException">tasks is null.</exception>
     /// <exception cref="System.ArgumentException">Must have at least one task.</exception>
     public static IEnumerable<Task<T>> OrderByCompletion<T>(this IEnumerable<Task<T>> tasks)
     {
       if (tasks == null)
       {
         throw new ArgumentNullException(nameof(tasks));
       }
       var allTasks = tasks.ToList();
       if (!allTasks.Any())
       {
         throw new ArgumentException("Must have at least one task", nameof(tasks));
       }
       // Create one TaskCompletionSource per task.
       var taskCompletionSources = new List<TaskCompletionSource<T>>(allTasks.Count);
       for (int tskCtr = 0; tskCtr < allTasks.Count; tskCtr++)
       {
         taskCompletionSources.Add(new TaskCompletionSource<T>());
       }
       int nextCompletedTask = -1;
       for (int nTask = 0; nTask < allTasks.Count; nTask++)
       {
         allTasks[nTask].ContinueWith(tsk =>
         {
           // Thread-safe increment: each finished task claims the next free slot.
           int taskToComplete = Interlocked.Increment(ref nextCompletedTask);
           switch (tsk.Status)
           {
             case TaskStatus.Canceled:
               taskCompletionSources[taskToComplete].SetCanceled();
               break;
             case TaskStatus.Faulted:
               // InnerException is used to avoid wrapping in an AggregateException.
               taskCompletionSources[taskToComplete].SetException(tsk.Exception.InnerException);
               break;
             case TaskStatus.RanToCompletion:
               taskCompletionSources[taskToComplete].SetResult(tsk.Result);
               break;
           }
         }, TaskContinuationOptions.ExecuteSynchronously);
       }
       return taskCompletionSources.Select(src => src.Task);
     }

In the above code, a list of TaskCompletionSource instances is first created, one per task. Afterwards, a continuation is attached to each task, and the task's outcome is forwarded through the corresponding TaskCompletionSource method.

You might be worried about TaskContinuationOptions.ExecuteSynchronously. It applies only to the continuation, and that block of code merely completes a TaskCompletionSource, which is cheap and perfectly fine; it won't block the calling code.

Now the above extension method can be attached to any list of tasks by calling OrderByCompletion. Let's see an example:

     private static void Main(string[] args)
     {
       var uriList = new List<string>()
       {
         "http://www.google.com",
         "http://www.yahoo.com",
         "http://www.msn.com",
         "http://www.bing.com",
         "http://vikutech.blogspot.in",
       };
       var tasks = GetDownloadedContentTaskAsync(uriList);
       foreach (var task in tasks.OrderByCompletion())
       {
         Console.WriteLine(task.Result);
       }
     }

     /// <summary>
     /// Gets the downloaded content asynchronously as tasks, one per URL.
     /// </summary>
     /// <param name="urlStrings">List of URLs.</param>
     /// <returns>Tasks producing the downloaded content of each URL.</returns>
     private static IEnumerable<Task<string>> GetDownloadedContentTaskAsync(IEnumerable<string> urlStrings)
     {
       foreach (var url in urlStrings)
       {
         var client = new WebClient();
         yield return client.DownloadStringTaskAsync(new Uri(url));
       }
     }

The code creates a list of download tasks and loops over them, printing each result as soon as the corresponding task completes, in whatever order the tasks happen to finish.
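To see the completion ordering without hitting the network, here is a self-contained sketch that substitutes delay-based tasks for the downloads (it carries a compacted copy of the OrderByCompletion extension above so it compiles on its own):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;

public static class TaskExtensions
{
    // Compacted copy of the OrderByCompletion extension from the post.
    public static IEnumerable<Task<T>> OrderByCompletion<T>(this IEnumerable<Task<T>> tasks)
    {
        var allTasks = tasks.ToList();
        var sources = allTasks.Select(_ => new TaskCompletionSource<T>()).ToList();
        int next = -1;
        foreach (var task in allTasks)
        {
            task.ContinueWith(t =>
            {
                // Each finished task claims the next free completion slot.
                int slot = Interlocked.Increment(ref next);
                switch (t.Status)
                {
                    case TaskStatus.Canceled: sources[slot].SetCanceled(); break;
                    case TaskStatus.Faulted: sources[slot].SetException(t.Exception.InnerException); break;
                    case TaskStatus.RanToCompletion: sources[slot].SetResult(t.Result); break;
                }
            }, TaskContinuationOptions.ExecuteSynchronously);
        }
        return sources.Select(s => s.Task);
    }
}

public static class Program
{
    // Returns its name after the given delay, standing in for a download.
    private static async Task<string> After(int ms, string name)
    {
        await Task.Delay(ms);
        return name;
    }

    public static void Main()
    {
        // Started in slow-first order, but printed in completion order.
        var tasks = new List<Task<string>> { After(300, "slow"), After(100, "fast"), After(200, "medium") };
        foreach (var task in tasks.OrderByCompletion())
        {
            Console.WriteLine(task.Result); // fast, medium, slow
        }
    }
}
```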
Source code: https://www.dropbox.com/s/9xemmci3ey40qug/TaskCompletionSequence.zip
