Ust Oldfield's Blog

SQL PASS Summit–Day 2

Day 2, Thursday, started off with a keynote from David DeWitt on cloud data warehousing, scalable storage and scalable compute. This set my theme for the majority of the day – which turned out to be big data tech.

 

My first session was with James Rowland-Jones and Kevin Ngo on sizing Azure SQL Data Warehouse for proposals – essentially answering “how much is this going to cost me?”. There are various factors to consider, which I will blog about separately. I’ve already briefly fed back to members of the team, and they’re keen to hear what I learnt in more detail.

 

My second session was about best practices for Big BI which, unfortunately, ended up being a sales pitch, and I came away feeling that I didn’t learn anything. There’s a lot of promise for BI in the big data space, so watch this space as we explore Azure SQL Data Warehouse, Azure Data Lake (Store and Analytics), and other big data technologies for BI.

 

The third session was with Michael Rys on Tuning and Optimising U-SQL Queries for Maximum Performance. It was a full-on session: I learnt a great deal and took plenty of notes. I need time to digest the information, as Michael covered a very complex topic very quickly. I will, however, be blogging on it in due course.

 

After an intense third session, I chose something less demanding to finish the day: a Q&A with the SQL Engineering team. This was a great opportunity to learn how other users are using SQL Server. Most of the questions were about indexing, backups and High Availability.

 

Tonight – packing, and networking before the last day of PASS tomorrow!

SQL PASS Summit–Day 1

Day 1, Wednesday, technically started on Tuesday with a newbies’ speed-networking event in which we had to rotate through a crowd of 10 other people, introducing ourselves and asking questions about our professional lives. This was awkward to begin with but, as the evening wore on, introducing ourselves to strangers became a lot easier and more normal. We then moved on to the Welcome Reception and a #SQLKaraoke event – great opportunities to meet new people from different parts of the world and of the community.

Wednesday morning proper began with a keynote from Joseph Sirosh, which essentially set the tone and theme for a large part of the conference sessions: Azure, Big Data and the Cortana Intelligence Suite.

The first session I attended was on Design Patterns for Azure SQL Database (for which a separate blog will be forthcoming).

The next session I attended was about incorporating Azure Data Lake Analytics into a BI environment (again, another blog is in the pipeline).

My final session of the day was Going Under the Hood with Azure Data Lake. This was the most insightful session of the day and has since sparked my brain into Data Lake mode (expect many blogs on this). It covered how Azure Data Lake works, how the U-SQL language works and how resources are allocated.

Tonight - more networking.

So far, the community has been so welcoming, and I’m very much looking forward to tomorrow, where I’ll be learning about Big Data solutions and best practices. I’m also looking forward to sharing all my experiences and learnings with my colleagues and the wider SQL community.

Introduction to Data Lakes

Data Lakes are the new hot topic in the big data and BI communities. Data Lakes have been around for a few years now, but have only gained popular notice within the last year. In this blog I will take you through the concept of a Data Lake, so that you can begin your own voyage on the lakes.

What is a Data Lake?

Before we can answer this question, it's worth reflecting on a concept which most of us know and love - Data Warehouses. A Data Warehouse is a form of data architecture: the core principle of a Data Warehouse isn't the database, it's the data architecture which the database and tools implement. Conceptually, the essential features of a Data Warehouse boil down to:

1. Data acquisition

2. Data management

3. Data delivery / access

A Data Lake is similar to a Data Warehouse in these regards. It is an architecture, and the technology which underpins a Data Lake enables the architecture of the lake to flow and develop. Conceptually, a Data Lake also wants to acquire data, it needs careful yet agile management, and the results of any exploration of the data should be made accessible. The two architectures can be used together, but conceptually the similarities end here.

 

Conceptually, Data Lakes and Data Warehouses are broadly similar yet the approaches are vastly different. So let's leave Data Warehousing here and dive deeper into Data Lakes.

 

Fundamentally, a Data Lake is not just a repository. It is a series of containers which capture, manage and explore any form of raw data at scale, enabled by low-cost technologies, and from which multiple downstream applications can access valuable insight which was previously inaccessible.

 

How Do Data Lakes Work?

 

Conceptually, a Data Lake is similar to a real lake - water flows in, fills up the reservoir and flows out again. The incoming flow represents raw data in multiple formats - emails, sensor data, spreadsheets, relational data, social media content and so on. The reservoir represents the store of the raw data, where analytics can be run on all or some of the data. The outflow is the analysed data, which is made accessible to users.

 

To break it down, most Data Lake architectures come in two parts. First, there is a large distributed storage engine with very few rules or limitations. This provides a repository for data of any size and shape: it can hold a mixture of relational data structures, semi-structured flat files and completely unstructured data dumps. The fundamental point is that it can store any type of data you may need to analyse. The data is spread across a distributed array of cheap storage that can be accessed independently.

 

There is then a scalable compute layer, designed to take a traditional SQL-style query and break it into small parts that can then be run massively in parallel because of the distributed nature of the disks.

 

In essence, we are overcoming the limitations of traditional querying by:

· Separating compute so it can scale independently

· Parallelizing storage to reduce the impact of I/O bottlenecks
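
To make the idea concrete, here is a minimal, purely illustrative sketch in R using the base parallel package. It is not how a Data Lake engine is actually implemented, and the folder and column names (data_lake/sales, amount, region) are invented, but it shows the same split-then-combine pattern: each worker aggregates its own chunk of the data independently, and the partial results are then merged.

library(parallel)

# Pretend each CSV file is an independent chunk of the lake's storage
chunk_files <- list.files("data_lake/sales", pattern = "\\.csv$", full.names = TRUE)

cl <- makeCluster(max(1, detectCores() - 1))   # one worker per spare core

# "Map" step: each worker aggregates its own chunk independently
partials <- parLapply(cl, chunk_files, function(f) {
  chunk <- read.csv(f)
  aggregate(amount ~ region, data = chunk, FUN = sum)
})
stopCluster(cl)

# "Reduce" step: combine the partial results into one final answer
combined <- do.call(rbind, partials)
total_by_region <- aggregate(amount ~ region, data = combined, FUN = sum)

A real Data Lake compute layer, such as U-SQL on Azure Data Lake Analytics, applies this pattern across many nodes and distributed storage rather than across the cores of a single machine.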

 

 

There are various technologies and design patterns which form the basis of Data Lakes. In terms of technologies these include:

· Azure Data Lake

· Cassandra

· Hadoop

· S3

· Teradata

With regards to design patterns, these will be explored in due course. However, before we get there, there are some challenges you should be aware of:

1. Data dumping - it's very easy to treat a Data Lake as a dumping ground for anything and everything. This will essentially create a data swamp, which no one will want to go into.

2. Data drowning - the volume of the data could be massive and the velocity very fast. There is a real risk of drowning if you don't fully know what data you have in your lake.

These challenges require good design and governance, which will be covered off in the near future.

 

Hopefully this has given you a brief yet comprehensive high-level overview of what Data Lakes are. We will be focusing on Azure Data Lake, which is a managed implementation of the Hadoop architecture. Further reading on Azure Data Lake can be found below.

 

Further Reading

 

To learn more about Data Lakes, the following resources are invaluable.

Getting Started With Azure Data Lake Store

Getting Started With Azure Data Lake Analytics and U-SQL

Azure Data Lake Overview

 

 

R - What Is It?

I have blogged often on the subject of R, but have not previously addressed what R is and why you should use it. In this blog post, I will set out what R is, why you should use it and how you can learn it.

What Is R?

R is a powerful tool for statistical programming and graphics. There is lots of software available that can do all of these things: spreadsheet applications like Excel; point-and-click applications like SPSS; data mining applications like SSAS; and so on. But what sets R apart from applications like those listed?

R is a free and open source application. Because it is free you don't have to worry about subscription fees, usage caps or licence managers. Just as importantly, R is open. You can inspect the source code and tinker with it as much as you want.

Leading academics and researchers use R to develop the latest methods in statistics, machine learning and predictive modelling. These methods are published as packages which can be accessed by anyone for free - there are thousands of packages available to download and use.
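
As a small illustration (the forecast package and R's built-in AirPassengers dataset are chosen here purely as examples), installing and using a contributed package takes only a few lines:

# Install a contributed package from CRAN once, then load it in each session
install.packages("forecast")      # Rob Hyndman's time-series forecasting package
library(forecast)

fit <- auto.arima(AirPassengers)  # AirPassengers is a dataset that ships with R
forecast(fit, h = 12)             # forecast the next twelve months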

R is an interactive language. In R you do analysis by writing functions and scripts, not by pointing and clicking. As an interactive language (as opposed to a data-in-data-out black box), R promotes experimentation and exploration, which improves data analysis and sometimes leads to discoveries that would not have been made otherwise. Scripts document all your work, from data access to reporting, and can be re-run at any time. This makes it easier to update results when the data changes. Scripts also make it easy to automate a sequence of tasks that can be integrated into other processes, such as an ETL.
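
For example, a hypothetical end-to-end script might look like the one below; the file and column names are invented, but the point is that every step from data access to the reported output lives in code and can be re-run whenever the data changes:

# sales_report.R - a hypothetical end-to-end analysis script
sales <- read.csv("sales.csv")                                  # data access

monthly <- aggregate(amount ~ month, data = sales, FUN = sum)   # analysis

write.csv(monthly, "monthly_sales_summary.csv", row.names = FALSE)   # reporting output

Running Rscript sales_report.R from a scheduled job, or from an SSIS Execute Process task, reproduces the whole analysis, which is what makes R straightforward to fold into an ETL process.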

One of the design principles of R is that visualisation of data through charts and graphs is an essential part of data analysis. As a result, it has excellent tools for creating graphics, from staples like bar charts to brand new graphics of your own design.
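
For instance, with base graphics and the mtcars sample dataset that ships with R, a labelled scatter plot with a fitted trend line takes just a few lines (contributed packages such as ggplot2 go much further):

data(mtcars)   # a sample dataset bundled with R

# Scatter plot of fuel economy against weight, with a fitted trend line
plot(mpg ~ wt, data = mtcars,
     xlab = "Weight (1000 lbs)", ylab = "Miles per gallon",
     main = "Fuel economy vs weight")
abline(lm(mpg ~ wt, data = mtcars), col = "red")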

With R you are not restricted to choosing from a rigid set of routines and procedures. You can use code and packages contributed by others in the community, or extend R with your own functions and packages. R is also excellent for mash-ups with other applications. For example, you can build it into your SSIS routine, or take advantage of the new R integration in SQL Server 2016.
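
Writing your own function is as simple as assigning one to a name; the helper below, a coefficient-of-variation calculation chosen purely for illustration, then behaves like any built-in function:

# A user-defined helper: coefficient of variation (standard deviation / mean)
coef_variation <- function(x, na.rm = TRUE) {
  sd(x, na.rm = na.rm) / mean(x, na.rm = na.rm)
}

coef_variation(mtcars$mpg)   # call it like any other R function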

We've covered, briefly, what R is. But why do you want to use it?

Why Use R?

There is a vibrant community built around R. With thousands of contributors and millions of users around the world, if you have a question about R, chances are someone has already answered it, or can.

It is quickly becoming an integral part of the Microsoft BI stack. Since Microsoft's acquisition of Revolution Analytics, R has featured in the more recent releases across the Microsoft BI world, from Power BI to SQL Server and from Visual Studio to Azure ML.

Where Can You Learn R?

There are various books and online courses on R which can quickly skill you up in this powerful language. Here are some I recommend:

EdX: Introduction to R for Data Science

Book: R Cookbook by Paul Teetor

Book: R Programming for Data Science by Roger Peng

Here at Adatis we run internal R training courses, which means that all of our employees have the opportunity to learn from internal subject-matter experts and improve their knowledge and skills.

You can also try out the R tutor from Revolution Analytics, which is a package built for R.

Deploying a Hybrid Cloud

Operations Management Suite (OMS) is an Azure-based tool that helps manage your entire IT infrastructure, whether on premise or in the cloud. OMS allows you to monitor the machines in your infrastructure and provides a bridge to a hybrid cloud solution: you can move multi-tier workloads into Azure, run tests on a copy of production workloads in Azure, and store critical data in Azure.

Configuring and deploying OMS is very quick and relatively straightforward. This post will deal with creating a hybrid cloud solution using OMS and configuring the solution so you can make best use of resources.

Within the Azure management portal you will have to create a new Operational Insights workspace as detailed below:


Once the Operational Insights workspace has been created, you can navigate to it in the Azure Portal and click “Manage”, which will bring up OMS itself. The start page should look like this:


Click on the “Get Started” button to begin creating your hybrid cloud solution, then add the various solutions to the suite. For this demonstration we’re going to accept the default and include all of the solutions.

 

The next step is to connect a data source, which can be an on premise machine, a virtual machine or Azure data storage. For this demonstration we are only interested in connecting to an on premise machine and a VM.