Posts

Showing posts from 2016

Modern Day Messaging Patterns

It's been a few days now, and I have been focused on understanding modern-day messaging patterns for a problem I am trying to solve. I know there are existing server-side tools like ActiveMQ, RabbitMQ, and even WMS that can do the trick and already have pre-defined patterns tested and validated for performance and security. In this case I am not trying to reinvent the wheel by creating a new pattern or another one of these server-side products, but I am definitely trying to understand how these products were built and whether I can leverage some of their principles in a server-side application I am writing. For example, in modern web application development, if .NET based, you have patterns like the ones defined here: Microsoft SOA patterns, which do the neat tricks you would need. A minimal sketch of the core idea follows below. Man, at times I feel I am going at 300 miles an hour without any crash guards: code reviews, custom product development, customer engagements, team man…
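To make the principle concrete, here is a minimal in-memory publish/subscribe sketch in Java. It only illustrates the decoupling idea that brokers like ActiveMQ and RabbitMQ implement at production scale; the Broker class, topic names, and handlers are hypothetical, not any product's actual API.

import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.function.Consumer;

// Toy broker: subscribers register a callback per topic; publishers
// fire messages without knowing who (if anyone) is listening.
class Broker {
    private final Map<String, List<Consumer<String>>> topics = new ConcurrentHashMap<>();

    void subscribe(String topic, Consumer<String> handler) {
        topics.computeIfAbsent(topic, t -> new CopyOnWriteArrayList<>()).add(handler);
    }

    void publish(String topic, String message) {
        topics.getOrDefault(topic, List.of()).forEach(h -> h.accept(message));
    }
}

public class PubSubDemo {
    public static void main(String[] args) {
        Broker broker = new Broker();
        broker.subscribe("orders", msg -> System.out.println("billing saw: " + msg));
        broker.subscribe("orders", msg -> System.out.println("shipping saw: " + msg));
        broker.publish("orders", "order #42 placed"); // both subscribers fire
    }
}

Real brokers add the parts that matter in production: durable queues, acknowledgements, and routing, which is exactly why the existing products are the first stop.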

Power BI To Embed Or Not To Embed

It is very critical for organizations to work and play with data, and Power BI, the reporting solution from Microsoft, is literally scorching the market with its rapid pace of adoption. On a quick note: a Power BI report shared like the one embedded in the original post is accessible by the public. In order to create a more personalized reporting structure with advanced security in Power BI, Power BI Embedded is the way to go. Create a workspace collection in Azure and then generate the required API keys (two by default: primary and secondary). These API keys will be leveraged by your web application, as sketched below. Once this is done, create the pbix solution file in your desktop tool and publish or import the pbix solution to the Azure workspace using PowerShell/C#/Ruby/Java etc. Now, to interact with the pbix file in your application, you need to leverage the Power BI Embed APIs. However, there is another approach using the Power BI APIs instead of the Embed APIs. The em…
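As a rough illustration of how those access keys get used, here is a sketch in Java that signs an app token as an HMAC-SHA256 JWT using only the JDK. The claim names (wcn, wid, rid) and the issuer/audience strings are my assumptions about the workspace-collection-era token format, not verified constants; treat this as the shape of the approach, not a drop-in implementation.

import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

// Sketch: build an app token (a compact JWT) signed with a workspace
// collection access key. Claim names below are assumptions.
public class AppTokenSketch {

    static String base64Url(byte[] bytes) {
        return Base64.getUrlEncoder().withoutPadding().encodeToString(bytes);
    }

    public static void main(String[] args) throws Exception {
        String accessKey = "<primary-or-secondary-key-from-Azure>"; // placeholder
        long exp = System.currentTimeMillis() / 1000 + 3600;        // 1 hour expiry

        String header  = "{\"typ\":\"JWT\",\"alg\":\"HS256\"}";
        // Assumed claims: workspace collection name, workspace id, report id.
        String payload = "{\"wcn\":\"MyCollection\",\"wid\":\"<workspace-guid>\","
                       + "\"rid\":\"<report-guid>\",\"iss\":\"PowerBISDK\","
                       + "\"aud\":\"https://analysis.windows.net/powerbi/api\","
                       + "\"exp\":" + exp + "}";

        String unsigned = base64Url(header.getBytes(StandardCharsets.UTF_8)) + "."
                        + base64Url(payload.getBytes(StandardCharsets.UTF_8));

        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(accessKey.getBytes(StandardCharsets.UTF_8), "HmacSHA256"));
        String token = unsigned + "." + base64Url(mac.doFinal(unsigned.getBytes(StandardCharsets.UTF_8)));

        System.out.println(token); // handed to the embed script on the page
    }
}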

Microsoft acquires LinkedIn

On Monday 6/13/2016, Microsoft announced its acquisition of LinkedIn. This is a major game changer in the world of IT. But before we get to some of the advantages of this acquisition: Microsoft was actually working on a LinkedIn killer on its Dynamics CRM platform. The idea was to expand the footprint of its CRM solution as well as create something unique with it. This started in early 2012, way before the actual acquisition of LinkedIn. Here are my thoughts on where this acquisition will lead Microsoft and LinkedIn. Microsoft gains a huge database of professionals and organizations in various streams: this alone is the most massive gain for Microsoft. It could start targeting professionals and organizations to either move onto or join the Microsoft platform, which could bolster its sales by a huge margin. Microsoft integration of LinkedIn ads with Bing: just imagine an organization trying to establish a marketing campaign. Now with LinkedIn a…

R - Notes

The following are basically my notes while studying R, meant as a reference point for myself. Just a few pointers to anyone preparing for or studying R:

- Take a quick look at your statistical math basics before proceeding.
- Before applying any formula to your base data, try to understand what the formula is and how it was derived (this will make it easier to understand).
- Use it in tandem with Data Analysis in Excel.
- Refer to the cheat sheets available on https://www.rstudio.com/resources/cheatsheets/
- Segregate the workbench for each module.
- There are best practices that can be incorporated while programming in R.
- Try and jot notes when and where one can.
- Refer to existing data-sets embedded in R before jumping into a data.gov file.
- Refer to R programs already written in Azure ML.
- rnorm() by default has mean 0 and standard deviation 1.
- head() has its own built-in precision.
- Default settings in R can be modified by the options() function, for example: options(d…

Hadoop Installation on Win 10 OS

Setting up the Hadoop files prior to a Spark installation on Windows 10:

1. Ensure that your JAVA_HOME is properly set. A recommended approach here is to navigate to the installed Java folder in Program Files and copy the contents into a new folder you can locate easily, e.g. C:\Projects\Java.
2. Create a user variable called JAVA_HOME and enter "C:\Projects\Java".
3. Add the following entry to the Path system variable: "C:\Projects\Java\bin;".
4. Create a HADOOP_HOME variable and specify the root path that contains all the Hadoop files, e.g. "C:\Projects\Hadoop".
5. Add the bin location for your Hadoop repository to the Path variable: "C:\Projects\Hadoop\bin" (keep track of your Hadoop installs, like C:\Projects\Hadoop\2_5_0\bin).
6. Once these variables are set, open a command prompt as an administrator and run the following commands to ensure that everything is set correctly: A] java B] javac C] hadoop D] hadoop version (a scripted version of this check appears below).
7. …
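If you would rather script the verification in step 6, here is a small Java sanity check along the same lines; the expected folder layout comes from the example paths in the post, and the class name is mine.

import java.io.File;

// Quick check that the environment variables from the post resolve to
// real folders before attempting the Hadoop/Spark installation.
public class EnvCheck {
    static void check(String name, String expectedChild) {
        String value = System.getenv(name);
        if (value == null) {
            System.out.println(name + " is NOT set");
            return;
        }
        File child = new File(value, expectedChild);
        System.out.println(name + " = " + value
                + (child.isDirectory() ? " (found " + expectedChild + ")"
                                       : " (missing " + expectedChild + "!)"));
    }

    public static void main(String[] args) {
        check("JAVA_HOME", "bin");   // e.g. C:\Projects\Java
        check("HADOOP_HOME", "bin"); // e.g. C:\Projects\Hadoop
    }
}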