Thursday, July 21, 2016

Microsoft Service Bus: Queue, Topic and Event Hub

Selecting between Queue, Topic and Event Hub:


Usage:

Queue:

The scenario for a queue is fairly clear: each message is received by only one receiver.
Of course it is possible to have multiple senders and multiple receivers, but the key point is that each message is delivered to only one of those receivers.

A simple example of using queues is POS (point of sale) systems. POS terminals produce data with different loads/timings, and the inventory management system has to process all of it. The inventory system and the terminals are loosely coupled; there might be different software in each terminal, but there is only one receiver on the other end.


Topic-subscription:

This type of queue is used when a single message has to be received by multiple receivers. It is also possible to differentiate between receivers, so that a group of receivers only receives particular types of messages.
As with queues, there can be multiple senders and multiple receivers (multiple receivers per subscription), but once a receiver reads a message from a subscription, the other receivers of that same subscription cannot read it again.
A simple example of this scenario is logging/search. You want a copy of your data to use in a logging (search) system, but there should be no coupling between your real-time system and the logging system.

The second scenario is when you want to divide your messages into different lines, like when you want to create a priority queue, or when a set of producers produces data for different systems.
A simple example of this is when you create data in one way and process it in different manners. For instance, if an ordering system registers orders for different companies, each company only needs to know about its own orders.

Event-Hub:

Event Hub is simply a big stream of data. In a way, Event Hub is an answer to the same issues that we addressed with topics: there are multiple consumers for each message. The big difference is that Event Hub has been designed to maximize throughput.
Each event hub has multiple partitions (normally between 4 and 16), and one client instance reads from each partition. If one of the clients goes down, or there are fewer clients than partitions, the clients compete with each other for access to the remaining partitions.

A simple scenario for this solution is logging players actions in a popular game. 

How they access data

Queue:

There is only one line of messages. Clients wait for a message, and when a new message is received, only one of them will process it.

Topic-subscription:

There are multiple lines, one per subscription. Clients of a subscription wait for data, and the same scenario as with queues happens within each subscription. It is important to note that there is no relation between the data in different subscriptions.

Event-Hub:

Clients are responsible for keeping track of the data they have read, meaning it is possible for different clients to read the same message multiple times. Messages are not removed from the system until a retention period passes (some days) or a size limit is reached.
Each client asks the hub for data after a specific message (a checkpoint), so it is quite possible to see some messages read multiple times when a client restarts.
There is a concept of consumer groups, which is roughly the same as subscriptions in the topic-based service bus.

How to manage failure

Topic-subscription:

There are two ways of receiving a message:

1- PeekLock (default)

The message is read by the client and locked for some time. When the client finishes its work, it simply tells the broker, and the broker removes the item from the queue.
If an error happens during processing, the client can abandon the message, so another client can read the same message later, in case the error was temporary.
Also, if the client crashes in the middle of processing, the lock times out and the message becomes readable by another client.

2-ReceiveAndDelete

The broker doesn't care about possible errors. It simply guarantees that each message will be read at most once!
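As a rough sketch, here is what the two modes look like with the classic .NET Service Bus client (Microsoft.ServiceBus.Messaging); the connection string and queue name are placeholders:

```csharp
using System;
using Microsoft.ServiceBus.Messaging;

class QueueConsumer
{
    static void Main()
    {
        // Placeholder connection string and queue name.
        var client = QueueClient.CreateFromConnectionString(
            "Endpoint=sb://...", "myqueue", ReceiveMode.PeekLock);

        BrokeredMessage message = client.Receive();
        if (message != null)
        {
            try
            {
                // ... process the message ...
                message.Complete();  // tell the broker to remove it from the queue
            }
            catch (Exception)
            {
                message.Abandon();   // release the lock so another client can retry
            }
        }

        // ReceiveAndDelete mode: the message is removed as soon as it is read,
        // so a crash during processing loses it (at-most-once delivery).
        // var fireAndForget = QueueClient.CreateFromConnectionString(
        //     "Endpoint=sb://...", "myqueue", ReceiveMode.ReceiveAndDelete);
    }
}
```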

Event-Hub:

Messages stay in the hub for a long time, so we can be sure that data will be read by clients at least once.
Each client sets a checkpoint after a period of time (or any rule the client decides on), and if it restarts, it continues reading from that checkpoint. So the client has to take care of duplicates itself.
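A rough sketch of this checkpointing pattern with the IEventProcessor interface from the same-era SDK (the class name is illustrative):

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.ServiceBus.Messaging;

// EventProcessorHost creates one instance of this class per partition.
class SimpleProcessor : IEventProcessor
{
    public Task OpenAsync(PartitionContext context) => Task.FromResult(0);

    public async Task ProcessEventsAsync(PartitionContext context,
                                         IEnumerable<EventData> messages)
    {
        foreach (var eventData in messages)
        {
            // ... handle the event; make the handler idempotent, because
            // everything after the last checkpoint is replayed on restart ...
        }

        // Record how far this client has read in this partition.
        await context.CheckpointAsync();
    }

    public Task CloseAsync(PartitionContext context, CloseReason reason)
        => Task.FromResult(0);
}
```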

Installation Environment

Queue and Topic-subscription are available both on Microsoft Azure (cloud) and on-premises, but Event Hub is only available on Azure.

Read More
Azure Event Hubs overview

Monday, July 18, 2016

Creating an email template engine - Part 1

Introduction

Sending emails is one of the features that most customers ask for. Sometimes they want to send a welcome letter, some notifications, or even a newsletter to their customers, so they ask for a module they can use. Writing a method for sending emails takes less than 10 seconds these days (
SmtpClient.Send :) ), but creating content that is flexible enough for the client to use, and at the same time not so complex that it makes them suffer, is a bit tricky. So in this post I will show you how to do it in a simple way.

Dividing the content

If you ask a client what content they will have in their emails, they will probably say anything is possible, but they usually want the same design on most of their emails. It is a way to show that all of those emails come from the same company. Also, the person who writes the email content usually doesn't have any knowledge about the design, and you don't want to let them mess up the CSS.
So in short, you have these parts inside your email engine:
1- An HTML/CSS design form
2- Some placeholders for special data like customer name, date, etc.
3- Some content that is changed by the editor

Using MVC engine

Usually, whenever we talk about HTML/CSS design, we think of how we would do it using MVC, because as web developers that is what we do. By using MVC, we always separate our business logic from the view. Right? We never write anything inside the view, so if the customer doesn't like the look of his pages, or if there is a change of theme, etc., it is possible without us doing anything. And let me add that doing the front-end usually needs a good eye for what is pretty, and I myself don't have it :)
So why not use the MVC engine to create our email body? It is only a simple HTML page, nothing more.

Replacing the placeholders 

String processing is very easy in all programming languages; you just need to define tokens that no one would use in normal text. Something simple like names with a double hash on both sides, like ##SomeName##, will probably do the job. You can also use the simple string.Replace method, but for the sake of both speed and flexibility I chose to work with the Regex class most of the time.

Implementing the code

Rendering the MVC view to string:

There is a very good post by Rick Strahl. He basically wrote a class to help do this kind of thing easily:
public class ViewRenderer
{
    protected ControllerContext Context { get; set; }

    public ViewRenderer(ControllerContext controllerContext = null)
    {
        if (controllerContext == null)
        {
            if (HttpContext.Current != null)
                controllerContext = CreateController<EmptyController>().ControllerContext;
            else
                throw new InvalidOperationException(
                    "ViewRenderer must run in the context of an ASP.NET " +
                    "Application and requires HttpContext.Current to be present.");
        }
        Context = controllerContext;
    }

    public string RenderView(string viewPath, object model)
    {
        return RenderViewToStringInternal(viewPath, model, false);
    }

    public string RenderPartialView(string viewPath, object model)
    {
        return RenderViewToStringInternal(viewPath, model, true);
    }

    public static string RenderView(string viewPath, object model,
                                    ControllerContext controllerContext)
    {
        ViewRenderer renderer = new ViewRenderer(controllerContext);
        return renderer.RenderView(viewPath, model);
    }

    public static string RenderPartialView(string viewPath, object model,
                                           ControllerContext controllerContext)
    {
        ViewRenderer renderer = new ViewRenderer(controllerContext);
        return renderer.RenderPartialView(viewPath, model);
    }

    protected string RenderViewToStringInternal(string viewPath, object model,
                                                bool partial = false)
    {
        ViewEngineResult viewEngineResult = null;
        if (partial)
            viewEngineResult = ViewEngines.Engines.FindPartialView(Context, viewPath);
        else
            viewEngineResult = ViewEngines.Engines.FindView(Context, viewPath, null);

        if (viewEngineResult == null)
            throw new FileNotFoundException("View could not be found.");

        var view = viewEngineResult.View;
        Context.Controller.ViewData.Model = model;

        string result = null;
        using (var sw = new StringWriter())
        {
            var ctx = new ViewContext(Context, view,
                                      Context.Controller.ViewData,
                                      Context.Controller.TempData,
                                      sw);
            view.Render(ctx, sw);
            result = sw.ToString();
        }

        return result;
    }

    public static T CreateController<T>(RouteData routeData = null)
        where T : Controller, new()
    {
        T controller = new T();

        HttpContextBase wrapper = null;
        if (HttpContext.Current != null)
            wrapper = new HttpContextWrapper(System.Web.HttpContext.Current);

        if (routeData == null)
            routeData = new RouteData();

        if (!routeData.Values.ContainsKey("controller") &&
            !routeData.Values.ContainsKey("Controller"))
            routeData.Values.Add("controller", controller.GetType().Name
                                                         .ToLower()
                                                         .Replace("controller", ""));

        controller.ControllerContext = new ControllerContext(wrapper, routeData, controller);
        return controller;
    }
}

// An empty controller used only to obtain a ControllerContext.
public class EmptyController : Controller { }

This class works very well, giving you the opportunity to render your Razor view to an HTML string.

Creating the viewModel
To work with MVC, you need a model to pass to the view, and then in the view the designer knows what he has to work with to create a good view.

Create an EmailTemplateBase class so you can reuse it for different views depending on what you need to do, and put your base stuff inside it. Something like:

public class EmailTemplateBase
{
    public string Markup { get; set; }
    public string BodyTextMarkup { get; set; }
}

Then you can inherit from it to create your specific view model, or just use it directly as your view model.
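For example, the ReceiptViewModel used later in this post could simply inherit from the base class (the OrderNumber property here is a made-up example of a receipt-specific field):

```csharp
// Inherits the shared markup properties; adds fields specific to receipts.
public class ReceiptViewModel : EmailTemplateBase
{
    public string OrderNumber { get; set; }  // hypothetical, for illustration only
}
```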
Creating the Controller
Since we are using ASP.NET MVC, it is not possible to render anything without going through a controller. So just create a simple one:

public class MyController : Controller
{
    public ActionResult Index(EmailTemplateBase model)
    {
        return View(model);
    }
}

Creating the view
It is a simple Razor MVC view, and the view model is the one you've already created, so I won't bother writing that :)

Replacing the name value tokens

I myself prefer to have this method in a helper class as an extension method, so I can use it like text.ReplaceTokens(collection), but you can use it as a simple method without extensions.
public static string ReplaceTokens(this string text, NameValueCollection nameValueCollection)
{
    foreach (var key in nameValueCollection.AllKeys)
    {
        var replacementText = nameValueCollection[key] ?? "";
        text = Regex.Replace(text, key, replacementText, RegexOptions.IgnoreCase);
    }
    return text;
}

Creating the Email Body
To create the email body, you need to get the body markup from the DB, where you put the body text you already took from the editor. You also need a list of name/values that you want to replace. The name/value collection has to contain everything the editor could possibly use, but it has to be OK for him not to use some or all of them.

public string CreateEmail(string bodyMarkupText, NameValueCollection tokenCollection)
{
    var url = "~/views/myPage/Index.cshtml";
    var model = new ReceiptViewModel()
    {
        BodyTextMarkup = bodyMarkupText
    };
    var bodyText = ViewRenderer.RenderView(url, model, null);

    return bodyText.ReplaceTokens(tokenCollection);
}

Now you just need to put the string as the email body and send the email to the user :)
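As a sketch, sending the rendered body with System.Net.Mail could look like this (the addresses are placeholders, and the SMTP host/credentials are assumed to be configured in web.config):

```csharp
using System.Net.Mail;

class EmailSender
{
    public void Send(string htmlBody)
    {
        var message = new MailMessage("noreply@example.com", "customer@example.com")
        {
            Subject = "Your receipt",
            Body = htmlBody,
            IsBodyHtml = true  // the body is the rendered Razor view
        };

        // The parameterless SmtpClient picks up host/credentials from
        // <system.net><mailSettings> in web.config.
        using (var client = new SmtpClient())
        {
            client.Send(message);
        }
    }
}
```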
Easy right?
Have fun :)

Thursday, July 14, 2016

EPiServer for beginners: where is version history?

Version history is one of the basic features of any CMS, and of course EPiServer has complete support for it over all of your pages and commerce data.
You might say that everyone knows EPiServer has this feature, but to be honest I didn't expect to find it inside the gadgets pane the first time, so I thought, why not mention it for beginners :)


So, to see the version history, follow these steps:
1- In the edit view, find the settings button at the top right of your pane.


2- Click on "Add gadgets"

3- Select the "Versions" gadget from the list


Version history is now available for you. You can see who has made changes to the selected page and whether the latest changes have been published :)

Thursday, July 7, 2016

Umbraco: Adding Gibe Link Picker to Macro parameters



The Gibe link picker is one of the coolest add-ons available for Umbraco 7. By installing it, you get the option to pick a link from inside your Umbraco project, or to write the URL yourself for an external link/mailto. You can read more about downloading/using it here.

However, one might want to use the link picker as a macro parameter. To do so, go to your solution -> App_Plugins > GibeLinkPicker > package.manifest


and then add "isParameterEditor": true to the property editor definition. It should look like this:
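The manifest should then end up looking roughly like this; the alias, name, and view path below are illustrative placeholders (use whatever the package ships with), and only the isParameterEditor line is the addition:

```json
{
  "propertyEditors": [
    {
      "alias": "Gibe.LinkPicker",
      "name": "Link Picker",
      "isParameterEditor": true,
      "editor": {
        "view": "~/App_Plugins/GibeLinkPicker/linkpicker.html"
      }
    }
  ]
}
```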

Now if you open a macro (or create one) and go to the Parameters tab, the link picker will be one of the options for the parameter type.




Tuesday, July 5, 2016

umbracoRedirect: the great hidden property


I had a scenario where I wanted to redirect one page to another in my Umbraco solution. I needed the page to be in my structure, but I didn't want it to show anything, and I didn't want its URL to appear in the address bar. Then my friend Niels Damkjær showed me a nice hidden feature in Umbraco.

There is no need to write code or do anything hard. You just need to add a new property to your document type with the alias umbracoRedirect, and its editor has to be a Content Picker. That's all you need to do :) Something like this:


Then on your page you can pick the target to redirect to, which is very cool.
There is also a post about this in the Umbraco community, but you don't need to follow those steps.


* After adding the field for the first time, it did not work for me, but when I restarted my web server it started to work fine.

* It uses an HTTP 302 status code to tell the browser to redirect to the other page, which is the temporary-redirect command.

Thursday, June 30, 2016

Beginners guide to Umbraco 7: A guide for developers Part 1

Introduction

Some months ago, I started to work with Umbraco. First I tried to understand how it works and what I needed to know in order to write good code with Umbraco, but it actually took more time than I had imagined.
The reason is that Umbraco is very flexible, which makes it easy to work with for content managers, front-enders, and us (C# developers, back-enders, full-stack developers). You can easily implement lots of stuff inside Umbraco without needing any IDE, and there is no need to implement a controller, a service, or anything else. So if you are using it to build a simple website that doesn't need much code or software structure, there is no need to read my post :)

As a developer why do I need Umbraco (or any CMS)?
If you want to develop some code on top of Umbraco, you probably have some user-specific scenarios that you need to implement in your code, and at the same time you want to give the editors some flexibility to use your structure and their creativity to make good content. For instance, recently I needed to create a rule engine in a project. It took me only a couple of hours to implement 4 different rule types, set up the consumers, and add them to my pages. Only 2 hours from A to Z :)

Where to start?
There are some videos at http://umbraco.tv/. Start from there and learn the structure of Umbraco and how it works. Then come back and continue this post :)
You will learn:
*What is Umbraco's back office
*How to create content types
*How to create templates
*How to create content like pages
*How to set up surface controller ( a controller with access to Umbraco Stuff)
*How to set up a service

You will not learn:
*What is going on in Umbraco
*The project structures for different kind of projects
*Relating different stuff
*Working with strongly typed items! Yep, that is for a later post.

Beginners stuff in short
Installing
Create a new empty ASP.NET solution in Visual Studio. Then go to the NuGet package manager and add UmbracoCms.

This will add the Umbraco files. Now press F5 to run the project, then follow the wizard to set everything up inside your DB. For a test project you can go on with the DB file, and when you want to publish your solution to a test/production environment you can easily set up SQL Server to host the DB, like any other ASP.NET project.

Umbraco Back office: 
The place where you can go and change stuff: create content types, create content, and do everything else that is possible through a CMS.
You can reach your back office by adding /umbraco to your site's URL:
www.yoursitename.com/umbraco

Creating Content Type:
It is obvious from its name :). A content type is basically your model/class definition. There are two different types of content types:

1- With a template: meaning you want to show them to people, like your pages.
2- Without a template: when you don't want to show anything for that single item, like your settings, etc.

You can create content types by going to settings > content types and creating your type. Just click on the ... next to "Content Types" and select what you want.


Then add your fields to your type and click save.
For instance, let's say you want to create a page that reads data from a text box in the back office and writes it to your start page. I will create a document type and call it something like "Test Doc Type".
Pay attention that Umbraco generates the alias of everything by removing the spaces and lower-casing the first character of the name.
Click on "Add property", fill in its name, then select the type of editor that you want to show to your user in the back office (not to the visitor!). For instance, you can select a rich text editor.
This will let the user enter text with good rich-text tools (TinyMCE, to be more specific). You can select options and then save the form.

Editing the view 
Open your templates and find the template with the same name (if you cannot see it, right-click on "Templates" and click "Reload").

This is your Razor view. You can do whatever you want in this view. You don't even need Visual Studio to create/edit templates.

You have different options for working with views. One of them is working with the "dynamic" data type in C#. Write this code inside your view and you will be fine:

@using Umbraco.Web 
@using Umbraco.Web.Mvc
@inherits UmbracoTemplatePage
@{
    Layout = null;
}

<h1>@(CurrentPage.YourFieldName != null ? CurrentPage.YourFieldName : "")</h1>
Easy right?
There is also one other way which is working with properties:
@using Umbraco.Web 
@using Umbraco.Web.Mvc
@inherits UmbracoViewPage
@{
    Layout = null;
}

<h1>@(Model.HasValue("YourFieldName") ? Model.GetPropertyValue("YourFieldName") : "")</h1>

Both snippets work fine, and since there is no compilation step involved, there is no important difference between them.
In later posts, I will show you how to work with strongly typed models inside Visual Studio (view/controller/services) but for now, set up your first page this way so you can see a sample page in Umbraco.

For additional data, you can take a look at this post by Dave Woestenborghs:
http://24days.in/umbraco/2015/strongly-typed-vs-dynamic-content-access/

Adding a page to your site
Go to your content tab, right-click on "Content" and click create to make a new page. Select your type and click create. Now give your page a title, fill in your field, then click save.
Now go to your base address to see your page.
Congratulations, you've just created your first Umbraco site.


Wednesday, June 8, 2016

Relational Database Design: Simple rules for creating primary keys

Introduction

Some years ago, I had some discussions with one of my friends about defining the primary key. In the old days, as a software developer you had to design your database first, so most people thought about how the database works and how they could implement their solution in a suitable way.
These days, developers use ORMs most of the time, which is very good, but at the same time they have started to ignore how their code is implemented inside the DB. So it is important to consider how your data will be translated by the ORM and what you will end up with inside your DB.


Some Rules 
Well, I think most people know these rules, but there is no harm in mentioning them.

Primary keys should never change

Your RDBMS uses keys to manage tables, sort them, find rows, and make relations between them, and the most important key is obviously the primary key. If you ever try to change the value of a primary key, it will affect the other related tables.

You cannot use a natural key or a key from another system

It is possible for natural keys to change, so you would obviously be violating the first rule. You might say, oh, someone's ID number will not change. But it is possible for the government to change the system for producing the IDs, and then you would have to change a lot of stuff.

They cannot be based on a formula

It is also a violation of rule 1, since you may need to change the formula in the future.

The uniqueness has to be easy to maintain

Your RDBMS will prevent you from putting a duplicate inside your primary key, so if you are generating your key with a method that can create duplicates, you will have lots of problems that you cannot fix easily.

Use a short but suitable key type

All RDBMSs use B+ trees as their index structure. They need to put your keys inside pages and fetch them, so if you use a big key, your RDBMS can fit fewer items inside a page and therefore has to access the disk more often, which is the bottleneck of every business application.

In SQL Server, number of index rows in each page can be calculated using this formula:
Index_Rows_Per_Page = 8096 / (Index_Row_Size + 2)

and the size of each row equals:

Index_Row_Size = Fixed_Key_Size + Variable_Key_Size + Index_Null_Bitmap + 1 (for row header overhead of an index row) + 6 (for the child page ID pointer)

Considering the above, the size of an index row for an int key is:
int (4 bytes): 4 + 3 + 1 + 6 = 14
which means you can put 506 rows inside a page.
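The same arithmetic can be written as a small helper; this reproduces the numbers used throughout this post:

```csharp
class IndexMath
{
    // Index_Rows_Per_Page = 8096 / (Index_Row_Size + 2), integer division.
    static int RowsPerPage(int indexRowSize) => 8096 / (indexRowSize + 2);

    static void Main()
    {
        System.Console.WriteLine(RowsPerPage(4 + 3 + 1 + 6));            // int:         506
        System.Console.WriteLine(RowsPerPage((2 + 1 + 16) + 3 + 6 + 1)); // GUID:        261
        System.Console.WriteLine(RowsPerPage((2 + 1 + 50) + 3 + 6 + 1)); // varchar(50): 124
    }
}
```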


What are the choices?

Considering the above, I will start with the worst one!

Never use (n)char or (n)varchar!

If you are using (n)char or (n)varchar for your key, I am almost sure you are violating all of the rules above, since no one stores a string key generated by their own system :)
You also need to worry about upper and lower case, and the size of the key is obviously big!

For a varchar(50) you have: (2 + 1 + 50) + 3 + 6 + 1 = 63,
which means you have only 124 keys inside a page, which is awful.

Even for a varchar(20) you have: (2 + 1 + 20) + 3 + 6 + 1 = 33,
which means you have only 231 keys inside a page, which is still awful.

Also, when you are generating such a key yourself, you are vulnerable to concurrent requests, like when two threads ask for a new key inside your application at the same time.


Less than 1% of the time, use GUID

GUIDs are a good data structure for making sure your key is unique. But at the same time they are very big, and they should not be used as a clustered index because there is no order in how they are generated.
The good thing about them is that they are easy to move between systems, because there will be no conflicts. Also, sometimes you have to create the key inside code; then it is of course better to use a GUID to reduce the chance of generating a duplicate key. But I would say, avoid them when you can.
For a GUID you have: (2 + 1 + 16) + 3 + 6 + 1 = 29,
which means you have only 261 keys inside a page, which is still bad.


99% of the time, use int with identity

An int with the identity option will help you keep everything simple. It is very small and does not take much space. Use it with identity so you can be sure the code will not generate duplicates because of concurrent processes. It can also hold more than 2 billion different keys, which is enough for most business systems.

Don't use smallint or tinyint
Tinyint and smallint are too small, and their size doesn't buy you that much in comparison with integer.

For smallint (2 bytes): 2 + 3 + 1 + 6 = 12,
which means you can put 578 rows inside a page, which is a 14% improvement, but it can hold only about 32,000 different values, which is not much.

For tinyint (1 byte): 1 + 3 + 1 + 6 = 11,
which means you can put 622 rows inside a page, which seems to be a 7% improvement, but since it can hold only 256 values :| you cannot use the other 366! :)

Less than 1% of the time, use bigint
OK! In very special projects, you might have a table that can hold more than 2 billion rows, like if you are working for Amazon :) Then use bigint, which I don't think happens to most developers in their professional lifetime :)

References:
Please find the formula for calculating the size of rows, etc., here on MSDN.
And for the sizes of the data types, go to this page, also on MSDN.

Monday, June 6, 2016

EPiServer for beginners: How to add custom fields to Order (eCommerce) programmatically

Introduction

Some time ago, I wrote a post about adding a custom field to eCommerce from the user interface. That post got very popular, and some folks like Khurram Khan and Steve C. mentioned that it is possible to add fields to eCommerce programmatically. So in this post I will describe how to do it:

Steps
1- Create an initialization class

Create a class, inherit from IInitializableModule, add the InitializableModule and ModuleDependency attributes to your class, and you are almost there.

[InitializableModule]
[ModuleDependency(typeof(EPiServer.Commerce.Initialization.InitializationModule))]
public class Initialization : IInitializableModule


* You will need "EPiServer.Commerce.Initialization" for InitializationModule, "EPiServer.Framework" for InitializableModuleAttribute, and "Mediachase.MetaDataPlus.Configurator" for the MetaField methods, but of course VS will add them for you when you write the names correctly.


2- Implement the Initialize method and get the context

public void Initialize(InitializationEngine context)
{
    var mdContext = CatalogContext.MetaDataContext;

3- Use MetaField.Load to load your field from EPiServer.
        MetaField.Load(mdContext, name)


4- If the result was empty, create the field using MetaField.Create
The structure of the method is like this:

MetaField Create(MetaDataContext context, string metaNamespace, string name, string friendlyName, string description, MetaDataType dataType, int length, bool allowNulls, bool multiLanguageValue, bool allowSearch, bool isEncrypted);


* Just to mention: length is the size of your data in bytes, so for example bool is 1 and DateTime is 8.

5- Load the class that you want to add the field to, using MetaClass.Load
  var mtClass = MetaClass.Load(mdContext, metaClassName);


6- Check whether the meta field already exists in the meta class by checking its fields
  mtClass.MetaFields.Contains(field);


7- If the meta class doesn't have the field, add it using AddField
mtClass.AddField(field);


8- Smile :)

--------
It is a very good idea to have simpler methods for adding fields and joining them to the meta class:

private MetaField GetOrCreateMetaField(MetaDataContext mdContext, string metaDataNamespace,
    string name, MetaDataType type, int length, bool allowNulls, bool cultureSpecific)
{
    var f = MetaField.Load(mdContext, name) ??
            MetaField.Create(mdContext, metaDataNamespace, name, name, string.Empty,
                type, length, allowNulls, cultureSpecific, false, false);
    return f;
}

private void JoinField(MetaDataContext mdContext, MetaField field, string metaClassName)
{
    var mtClass = MetaClass.Load(mdContext, metaClassName);

    if (MetaFieldIsNotConnected(field, mtClass))
    {
        mtClass.AddField(field);
    }
}



* And just to say: it is good practice to keep your strings inside constants/enum classes. If you have a project but no place for your enums and constants, you should reconsider some of the stuff in your code :)

The whole code looks like this:

[InitializableModule]
[ModuleDependency(typeof(EPiServer.Commerce.Initialization.InitializationModule))]
public class Initialization : IInitializableModule
{
    public void Initialize(InitializationEngine context)
    {
        MetaDataContext mdContext = CatalogContext.MetaDataContext;

        var myField = GetOrCreateMetaField(mdContext, Constants.Metadata.Namespace.Order,
            Extensions.PurchaseOrderExtensions.myFieldName, MetaDataType.DateTime, 8, true, false);

        JoinField(mdContext, myField, Constants.Metadata.OrderForm.ClassName);
    }

    private MetaField GetOrCreateMetaField(MetaDataContext mdContext, string metaDataNamespace,
        string name, MetaDataType type, int length, bool allowNulls, bool cultureSpecific)
    {
        var f = MetaField.Load(mdContext, name) ??
                MetaField.Create(mdContext, metaDataNamespace, name, name, string.Empty,
                    type, length, allowNulls, cultureSpecific, false, false);
        return f;
    }

    private void JoinField(MetaDataContext mdContext, MetaField field, string metaClassName)
    {
        var mtClass = MetaClass.Load(mdContext, metaClassName);

        if (MetaFieldIsNotConnected(field, mtClass))
        {
            mtClass.AddField(field);
        }
    }

    private static bool MetaFieldIsNotConnected(MetaField field, MetaClass mtClass)
    {
        return mtClass != null && !mtClass.MetaFields.Contains(field);
    }
}

A Sample in a project
You can look at Steve's CommerceStarterKit, and to be more specific, this page :)

Acknowledgment
My gratitude to Steve Celius for sharing his code with us.

Saturday, May 28, 2016

Programming tips: Entity Framework (or any ORM) is NOT the GOD of data! You need to know how to work with it

Introduction

In my professional life I've worked with different ORMs, and of course the best ones were Hibernate and Entity Framework. They save almost half of the time you would otherwise spend on your software by handling the requests to the DB, so you just need to know how to work with objects instead of relational data. However, they are not gods! They do exactly what you tell them; they cannot work miracles, because they don't know your intention! They are simply software systems!

The problem

A simple question can lead us to the answer. Take an individual who knows object orientation and give him these classes.

Code:

class A
{
    public string Status { get; set; }
    public List<B> Bs { get; set; }
}

class B
{
    public string Status { get; set; }
    public List<C> Cs { get; set; }
}

class C
{
    public string Status { get; set; }
    public string Name { get; set; }
}



Now ask him to give you the names of the Cs that have status 'BLBlah', are in Bs with status 'Blah', and are in As with status 'BlahBlah'. You will end up with code like this:

Code:

var results = new List<string>();
var selectedAs = context.As.Where(a => a.Status == "BlahBlah");

foreach (var a in selectedAs)
{
    var selectedBs = a.Bs.Where(b => b.Status == "Blah");
    foreach (var b in selectedBs)
    {
        var selectedNames = b.Cs.Where(c => c.Status == "BLBlah")
                                .Select(c => c.Name)
                                .ToList();
        results.AddRange(selectedNames);
    }
}

return results;



Pretty straightforward, huh? He rocks! And the ORM rocks! Right?!
Well, he just added a lot of overhead to your DB!

What is wrong with this code?

The way you've asked your query! Let's say there are 100 As with the wanted status, and each of them has 100 Bs with the wanted status.
You've asked your ORM to find those 100 As, then for each of them you've asked it to find its Bs, and then for each of those Bs to find its Cs.
That would be completely fine if you were working in memory, but this code means:

1 call for the As + 100 calls for the Bs + (100 * 100) calls for the Cs = 10,101 calls

So you've sent 10,101 requests to the DB for a simple 3-level select! How fast can that be?!!!

Let's See Another Example

So now you might say that it is because of the foreach loops I wrote, but that is not the case.
Let's say I want to convert the structure above to this one and then use it somewhere.

Code:

class ConvertedA
{
    public string Status { get; set; }
    public List<string> BStatuses { get; set; }
    public List<string> CNames { get; set; }
}
With a simple object-oriented view, you will probably end up with this code (with no foreach):
Code:

List<ConvertedA> GetConvertedList()
{
    return context.As.Select(a => new ConvertedA()
    {
        Status = a.Status,
        BStatuses = a.Bs.Select(b => b.Status).ToList(),
        CNames = a.Bs.SelectMany(b => b.Cs).Select(c => c.Name).ToList()
    }).ToList();
}


OMG! What a wonderful query! Right?! So simple! But again, you are sending lots of requests to the DB! Why?

1 select for the As + 100 selects for the Bs + (100 * 100) selects for the Cs = 10,101 requests!

The Solution

There are 2 solutions to this problem. One is from the viewpoint of the software architect, and the other from the viewpoint of the programmer.


The programmer

Write the best query
As Mahdi Hasheminejad mentioned in the comments, there are many cases where you can select the correct data with one query. It is pretty useful and, of course, the best way to solve the problem. The query can be written like this:

var results = context.As
.Where(a => a.Status == "BlahBlah").SelectMany(a => a.Bs)
.Where(b => b.Status == "Blah").SelectMany(b => b.Cs)
.Where(c => c.Status == "BLBlah");

And the result will be translated to this query:
SELECT
[Extent3].[Status] AS [Status],
[Extent3].[Name] AS [Name]
FROM [A] AS [Extent1]
INNER JOIN [B] AS [Extent2] ON ...
INNER JOIN [C] AS [Extent3] ON ...
WHERE (N'BlahBlah' = [Extent1].[Status]) AND (N'Blah' = [Extent2].[Status]) AND (N'BLBlah' = [Extent3].[Status])

Load needed data in memory
If you cannot handle your request with a good query, load your data first! A good example of this case is when you need to compare something with the result of something outside your DB, like when you need to call a service.
For the first example, you can say:
var Cs = context.Cs.Where(c => c.Status == "BLBlah").ToList();
var Bs = context.Bs.Where(b => b.Status == "Blah").ToList();

Now use these 2 lists inside your foreach and match them by their Ids.
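A minimal sketch of that in-memory approach, assuming the A/B/C classes from the first example plus hypothetical Id/BId key properties (the original classes only expose Status and Name):

```csharp
// 2 round trips to the DB in total; everything after that happens in memory.
var bs = context.Bs.Where(b => b.Status == "Blah").ToList();
var cs = context.Cs.Where(c => c.Status == "BLBlah").ToList();

var results = new List<string>();
foreach (var b in bs)
{
    // match by the (hypothetical) foreign key in memory,
    // instead of issuing one query per B
    results.AddRange(cs.Where(c => c.BId == b.Id).Select(c => c.Name));
}
```

Two queries instead of thousands; the trade-off is that you pull more rows into memory than you may strictly need.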


The Architect

ORMs simply map tables to related objects in memory. But there are 2 ways of working with relations: lazy loading (which is the default in most ORMs) and eager loading.
Lazy loading simply means that the ORM will wait for you to ask for something, and only then load the data. For instance:
var a = context.As.First();
This will only load 1 A object from the DB and nothing more.
Now if you write:
var bs = a.Bs.ToList();
your ORM will send another request to fetch the Bs.
This is exactly what most code needs. But in some cases, we know that an A is not usable without its Bs. So the architect can decide to use eager loading for that relation. Then when you say:
var a = context.As.First();

your ORM will retrieve your A and all the Bs that are related to it.

* Eager loading is not a good solution 90% of the time. It depends on the nature of your data, so don't use it carelessly.
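In Entity Framework, for instance, eager loading is requested with the Include extension method. A sketch against the hypothetical context/As/Bs from the earlier examples (EF6 shown; EF Core exposes the same call from Microsoft.EntityFrameworkCore):

```csharp
using System.Data.Entity; // EF6: brings the lambda overload of Include into scope
using System.Linq;

// Lazy (the default): one query now, another later when a.Bs is first touched.
var lazyA = context.As.First();

// Eager: one query that fetches the A together with its related Bs.
var eagerA = context.As
    .Include(a => a.Bs)
    .First();
```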





Friday, May 27, 2016

Software Architecture Tips- Don't put logs in your production DB


Some time ago, I had some discussions with one of my friends regarding keeping logs inside the DB.
While it might sound attractive to many people, it is really a bad idea.

Why people decide to have their logs on DB

First things first :) we have to see what is good about having your logs inside a DB. Most people claim that when you have multiple servers (like web servers), it is hard to collect your logs and combine them in order to understand what went wrong.
That is a fair point, but there are many other ways to do that without affecting your important resources.

What is the disadvantage?

* First, you are sacrificing the most valuable resource in your system to keep some logs for later use! You just need to have them somewhere for future reference. That is all! No realtime access, no need for indexing, etc.

* Secondly, you might lose your logs. When the network goes down, or the DB goes down, you will not have a clue about what happened in your code.

* Then: you will stop logging the other important parts of your code, since you will be thinking of the resources you are using. You will lose all the Debug and Info entries, because if you start logging them, your DB will struggle to return even a simple select to you! So you will skip many important logs that you will need in the future.

What is a log in the view of Software Architecture

Logs are simply some data that has to be managed separately. They are not there to fulfill your system's functionalities. The only reason for having them is to help you find out what is wrong with the system and fix it ASAP.

What to do then?

First, your logging system has to be implemented in a way that lets you change its behavior easily. If you want to add debug logs or remove them, you have to be able to do it. The thread that handles your logs has to be different from the ones handling your system's processes. No critical resource should get busy because of logging.

But it takes a lot of time!

There are lots of different logging libraries. One of the best is log4net for .NET applications. It is implemented by Apache and it is very easy to use.
It has an XML file that you can use to say what to do with a log, what the format of the output shall be, etc. You can also say that you want your output to be handled in several different ways, and it has a separate thread so it won't affect your system.

Where shall I put my logs then?

Of course, the first place to put your logs is on the disk. It is available all the time (if not, your server is down, so you have no logs anyway :) ), it is not a valuable resource and it is almost free since you are on the web server. I suggest having one file for all of your logs and another one only for Errors and Fatals, so you can spot the errors easily.
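For instance, that two-file setup can be sketched in log4net's XML configuration roughly like this (a sketch, not a drop-in config; the file paths and appender names are made up):

```xml
<log4net>
  <!-- everything from INFO up goes to one rolling file -->
  <appender name="AllLogs" type="log4net.Appender.RollingFileAppender">
    <file value="logs\all.log" />
    <appendToFile value="true" />
    <layout type="log4net.Layout.PatternLayout">
      <conversionPattern value="%date [%thread] %-5level %logger - %message%newline" />
    </layout>
  </appender>
  <!-- a second file that only receives ERROR and FATAL -->
  <appender name="ErrorsOnly" type="log4net.Appender.RollingFileAppender">
    <file value="logs\errors.log" />
    <filter type="log4net.Filter.LevelRangeFilter">
      <levelMin value="ERROR" />
    </filter>
    <layout type="log4net.Layout.PatternLayout">
      <conversionPattern value="%date %-5level %logger - %message%newline" />
    </layout>
  </appender>
  <root>
    <level value="INFO" />
    <appender-ref ref="AllLogs" />
    <appender-ref ref="ErrorsOnly" />
  </root>
</log4net>
```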

But I can't check my logs every day, especially since I have several servers

As I mentioned, it is not a problem for your system; the log manager has to handle it, and you can do that easily by configuring your logger. There are many solutions for collecting logs. Sentry is one of the simplest: you can tell your logger to push all errors to your Sentry account and then check them easily in a managed third-party web application. There are also other systems that watch your log files and update their status based on those files.

So if you have several applications to take care of, you can see all of the processes in a third-party application somewhere else, and you've only used 1 thread of your system, your web application's disk and a small portion of your network.


Thursday, May 26, 2016

Programming Tips: Don't mix 2 tasks together


In my professional life, I've seen lots of people with different coding styles, but one of the most important issues most programmers have comes from the fact that they mix different tasks together.

Rule of 30

You've probably heard of the rule of 30 in clean code, but have you ever thought about why a simple rule like this is so important?
The reason is simple: to make you create simpler methods.
So if you have a method that does 10 things together, you will be forced to write 10 different methods and then call them.

Why some people avoid it

Well, in my opinion, 30 lines is enough for implementing a single task most of the time, and I reserve 1% for the exceptions. But what about the other times?!
In my opinion, most of the time it is hard for people to decide how to separate the parts of a big task into several smaller tasks, so they try to solve the whole problem in one big, ugly method.

Example

Any normal method that you write on a daily basis can be a simple example of this matter. For instance, reading data from some text and filling your tables in the DB. Very easy, right?

If you don't divide your big task, you will have one big method for doing everything, and in it you will probably try to read every line and, for each line, try to save data into your DB. Right?
That is ugly code, and I haven't even started! :)
OK, hopefully you have a DB with a good structure, so you need to put your stuff into tables that have child items themselves.
...And most of the time your text data is not normalized (you wish! :) )
So you will start reading data line by line, storing common data into parent tables the first time and ignoring them the next times.

You see where this is going, right?! UGLY! UGLY! LAKH!

What should you do?!

You have a task to read data from text and store it in the DB, right?! So read the data first into structured objects (like DTOs) and then save them into the DB in another method. You have complex objects?! Easy! Create a method for reading each one and call them inside the parent!
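A rough sketch of that separation (all names here are made up for illustration, and the assumed input format is a trivial "company;product" per line):

```csharp
using System.Collections.Generic;

public class OrderDto { public string Company; public string Product; }

public class OrderImporter
{
    // Parent method: only coordinates the 2 tasks.
    public void ImportOrders(string text)
    {
        List<OrderDto> orders = ReadOrders(text); // task 1: text -> structured objects
        SaveOrders(orders);                       // task 2: objects -> DB
    }

    private List<OrderDto> ReadOrders(string text)
    {
        var orders = new List<OrderDto>();
        foreach (var line in text.Split('\n'))
            orders.Add(ReadOrderLine(line));      // one small method per complex object
        return orders;
    }

    private OrderDto ReadOrderLine(string line)
    {
        var parts = line.Split(';');
        return new OrderDto { Company = parts[0], Product = parts[1] };
    }

    private void SaveOrders(List<OrderDto> orders)
    {
        // the only place that talks to the DB (elided here)
    }
}
```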


What are the benefits?
* It is easy to understand what you did, so it is easy to maintain and debug.
* You have separate methods based on your functionalities, so you can reuse your code.
* You handle each method based on what it has to do, so your code handles each situation better.

Friday, April 29, 2016

String or binary data would be truncated. EPiServer


Introduction
I had a stressful story 2 days ago. Well, we were at the end of a sprint and, like many other projects, enjoying our peaceful time solving a few minor issues that our customer had reported, when suddenly we faced the error:
String or binary data would be truncated

I know what you are thinking:
"Yup! OK, Someone used a name longer than 50 char..."

But that was not the story!

How I checked it

Fortunately, we were at the end of the sprint, so I could check all the new properties very fast, but there was nothing wrong. I checked my DB with a query like the one I used for finding my string here, looking for strings with len > 50. And I was shocked, since there was nothing wrong.

Then I asked my good friends at EPiServer to help me. Quan Mai and Daniel Ovaska guided me on how to find the cause of the problem.

Thankfully, I had a backup from the last night, so I could compare it with my working copy easily using the "Visual Studio SQL Server database comparison tool" that I've described here :) and found the problem.


The Problem

As Quan Mai mentioned, when you have a mismatch between your code and your DB, you will get this error. Of course, I hoped the error message would be more obvious, but just keep that in mind :)

Anyways, I found the problem! Yayyyyy!
We have 2 totally different branches that we have been working on for some time, and we have a separate DB for each of them. Then I found the properties of the other one inside my DB...

Why?
Well, if you switch your working branch, it will change the web.config, but you don't keep your DLLs inside your source control, right? So keep in mind to rebuild your code before browsing your site :)

How to compare 2 Sql Server Databases with VS 2015

Introduction

There are times when you need to check your DB, either the data or the schema. Sometimes you have a backup and you want to see what has been added, updated or removed from your data, like when you have an error whose cause you cannot find.
Sometimes you want to see if someone has changed the schema, and you want to know what the change is.

Solution 
It is an old problem, right? But in the old days you had to pay lots of money to be able to do that.
Take a look at Red Gate SQL Data Compare or SQL Data Examiner. Yup, you have to pay at least $300 to do that. :|
But the good news is that your Visual Studio has the ability to do a lot of data and schema comparison, which is free and, more importantly, is inside our great tool, Visual Studio. :)

How to do it?
Inside Visual Studio 2015 (mine is Professional), go to Tools > SQL Server. You will see 2 options, for comparing either Data or Schema.



Insert your connection data, click Compare and you are good to go :)


Tuesday, April 26, 2016

Agile: Why should I spend lots of time on brainstorming, pair working and stand-up meetings?


Introduction

If you have ever worked with Agile processes like SCRUM, you will have seen that a lot of time goes into brainstorming, stand-up meetings and pair working. Considering all the time that is spent, some will decide to ignore some of those rules, and that may have dramatic effects on the quality of the solution and the total time spent on the project.

General Idea
Back in the old days, there were people who analyzed the needs, then some others had to design a system based on that analysis, and at last there were some programmers who would implement the code. Generally speaking, the analyst was the only one who had to be an expert in the field of the project; based on his vision, the designers (people with a university degree in software engineering) would design a model and test it against the analysis, and the programmers just had to implement the model.

What happened in practice
* It was very hard to implement. Software teams were not mature enough to implement everything as the idea said, so in practice they spent a lot of time on design and analysis, but in the end the implemented result was far from what the customer needed (because of the gray zone, incomplete data, low-depth analysis, lack of software knowledge, etc.).

* Around 60% of the time was used for creating documents that no one would read or understand in the end. Add to this the maintenance time, since the documents also had to be kept updated.

* There was a big gap between analysis and implementation, so when they finished the implementation, the customer might not need it anymore. (We consider that a failure.)

What About Agile

With Agile we took out the overhead of analysis and design. But it is not magic, nor a miracle!
It simply says: choose some programmers who know the basics of software design and can solve problems (we call them developers), give them the authority to decide what they shall do, and let them talk to the customer.

What If we remove the brainstorming session?

In Agile, the brainstorming session is a meeting where your team decides about the time they will need for each task. In SCRUM, for instance, there is a session at the start of each sprint which can take up to 1 day. So many people will say: "Oh! The whole team wastes 1 day deciding about the time they need?!!! I will ask one of them who has enough experience to do this instead, so I will win lots of time!"
Well, then you will end up with a team that has no vision of the whole set of tasks; therefore the code quality will decrease, and you might end up spending lots of time on merging/refactoring/maintenance instead.
Worse than that, if your expert doesn't describe what he had in mind, the one who has to implement it might do the task another way, and he is not to blame, since he has a limited vision and, with that vision, his solution works fine.

What if we remove the pair working
Pair working is the process of 2 people working together on the same machine. Well, one is writing the code and the other is watching over his shoulder to prevent mistakes.

Yup, you might say that it will double the time you need to spend on each task, so why work in pairs?
Well, it will prevent people from implementing very complicated code that no one would understand, and the maintenance cost will decrease since 2 people have checked it line by line. And you have 2 people to handle the changes, because both have knowledge of the project.

* My experience: don't work as pairs when the task is simple, otherwise you will get bored and you will have wasted your time.

What if we remove the stand-up meeting

The stand-up meeting is for your team to understand what is going on while helping each other. Some would say that when we have a problem, we will ask each other, we are good friends, etc., but in practice I saw these scenarios:
* The one who is drowning in the problem: software developers like Google very much. They will try to find the answer to all problems there. While it is a good way of solving your problems, it is a waste of time if you spend 1 week solving a problem that your teammate already knew.

* The one with a lack of experience: it is really hard to ask questions for people who don't have enough experience, because for them it is like admitting that they don't know. Also, other people are mostly busy during the day, so they might feel interrupted when others ask too many questions. Therefore, it will be too late when you finally see their code and understand what a mess they have made.

* The taciturn one: in every team, there are some people who enjoy their inner world more than the outside world, and that is why they chose this job. If there is no stand-up, nobody will understand what they are doing, their achievements and their mess-ups.

* The one with complicated solutions: software developers like to write code that can be applied to a general issue, and normally that is good, but not when they turn a simple subject into a twisted and much harder problem that is hard to read. Of course, code review is one of the ways to prevent this problem, but it is simpler this way, like when you see someone who wants to spend 2-3 days implementing a simple service wrapper.

* The one with a rendezvous point:
The stand-up meeting is a rendezvous point. It helps the team to manage their time (developers don't like to wake up early in the morning, so you might see someone who wants to start his day at 1 p.m. :) )
It helps people to maintain their focus, since no one will ask you a question when you are deep in your task.

Conclusion 
Software development is not a concrete block that you cannot change. You need to understand the pros and cons and how they apply to your team in order to get the best result. So if you want to do Agile but want to get rid of any of the above, make sure that you know the consequences.




Cherry Pick! What an amazing command


Introduction

Well, I've used cherry-pick several times. It is a very useful tool that you will use a lot once you understand how it works.

Basically speaking, you will need it when you have 2 different branches that some people are developing concurrently, and you want some of the changes to be applied in both. Some might say: we will just copy the code. Well, then you will have merge conflicts, and if you are the one responsible for merges, you know how painful it is when you have to understand which side to choose and how to merge, especially when it is not your code.

Using cherry-pick, you are actually taking some code from another branch and applying it to yours as a new commit on your branch. Since the same change is then present on both sides, when you later merge the 2 branches together, the merge will usually go through without conflicts for that change.

When you realize that you love it!

OK, I had 2 separate branches and I wanted one of them to have all of the changes from the other branch, but there were some changes that I didn't want to merge for a day or 2. Well, that was the moment I realized how much I love this command. It is very fast and easy to use, especially with SourceTree. :)


Steps in SourceTree
It can be done with these steps:

1- Select the branch that you want to apply the change to as your current branch.

2- In the log/history window there is a drop-down at the top left; select "All Branches" instead of "Current Branch" so you can see the changes in all branches.

3- Select the changes that you want using Ctrl+left-click or Shift+left-click, then right-click on the selection and choose Cherry Pick!
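For reference, the same flow on the plain git command line, as a self-contained sketch that builds a throwaway repository (the branch, file and commit names are made up):

```shell
set -e
cd "$(mktemp -d)"
git init -q demo && cd demo
git config user.email you@example.com && git config user.name you

echo base > file.txt
git add file.txt && git commit -qm "base"   # common starting point

git checkout -qb feature                    # a second branch with an extra change
echo fix >> file.txt
git commit -qam "the change we want"

git checkout -q -                           # back to the original branch
git cherry-pick feature                     # apply that one commit here
cat file.txt                                # the picked change is now on this branch
```

SourceTree's Cherry Pick menu item runs exactly this `git cherry-pick` under the hood.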


Congratulations, you have done your first cherry-pick! :) Very easy, right?!