Blogs & Articles

CAMSS Canada West 2017

August 1, 2017

Groundswell is proud to sponsor CAMSS Canada West, Canada’s premier event for IT and Line of Business leaders.

Created in collaboration with the CAMSS Executive Council, CAMSS Canada has been designed to provide the optimum learning and networking environment for senior executives across IT, Security, Governance, Data, Analytics, Digital, Marketing and Customer Experience to gather and share best practices on digital innovation and business strategy. CAMSS Canada represents the highest quality, most valuable environment for both IT and Line of Business executives from across the country focused on enhancing business strategies and innovation of Cloud, Analytics, Mobile, Security and Social technologies.
It is, quite simply, the ‘must attend’ event for leaders across the entire Canadian business spectrum.
For more information please reach out to Groundswell’s Events Team directly.

Automating AWS Infrastructure Setup with Terraform

by Chandrakant Rai

July 4, 2017

In this blog post we will see how to set up a simple AWS infrastructure using Terraform. Any company moving its infrastructure to a cloud provider will want to define that infrastructure as code, which gives it immutable infrastructure and enables deployment strategies such as canary or blue-green deployments that are well suited to the cloud. Terraform is a tool by HashiCorp that helps achieve this goal. The same setup could be automated using Python and the native AWS Boto library, but Terraform has no cloud-vendor lock-in and can just as easily be used with other providers such as Azure, Google Cloud, or Rackspace if an organization runs a multi-cloud setup.

This is a very simple AWS setup in which we will:

  • Spin up AWS EC2 instances
  • Create a bucket in the AWS S3 service and use it for shared Terraform state storage

Pre-requisites

  1. Ensure you have an AWS account set up
  2. Ensure that you have an access key and secret key

Terraform Installation

Terraform is distributed as a zip archive that supports multiple platforms and architectures. We will install the Windows 64-bit package and unpack it under the C:\terraform folder. Ensure the single "terraform" executable is present in that folder, add the folder to the PATH variable, and verify the installation by opening a new terminal session and typing "terraform".
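
A quick sketch of that verification step, assuming the archive was unpacked to C:\terraform and that folder was added to PATH:

    terraform            # prints the list of available Terraform commands
    terraform version    # prints the installed Terraform version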

AWS Setup

For the pre-requisite setup, we have created a user “aws_terraform” with admin privileges and got the access key id and secret key for that user.

As a first step, create an aws.tf file (a Terraform resource template file) under the "C:\terraform\AWS" folder. The AMI we used is for Red Hat Enterprise Linux 7.3 as provided on the AWS Marketplace, and the instance type we are going to spin up is a t2.micro. For access_key and secret_key, use the values for the user created above.
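
A minimal sketch of what such an aws.tf could look like; the region and AMI ID below are placeholders, not the exact values from the original setup:

    # aws.tf - minimal sketch (placeholder values, Terraform 0.x era syntax)
    provider "aws" {
      access_key = "YOUR_ACCESS_KEY"
      secret_key = "YOUR_SECRET_KEY"
      region     = "us-east-1"           # assumed region
    }

    resource "aws_instance" "example" {
      ami           = "ami-xxxxxxxx"     # RHEL 7.3 AMI ID from the AWS Marketplace
      instance_type = "t2.micro"
    }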

Validate the Terraform file by running the "terraform plan" command. Once we are satisfied with the output, we can run the "terraform apply" command to create the AWS resources and then confirm in the AWS dashboard that a t2.micro EC2 instance using the RHEL AMI has been created.
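
For example:

    terraform plan     # preview the changes Terraform would make
    terraform apply    # create the resources described in aws.tf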

Output of the "terraform apply" command:

[Screenshot: "terraform apply" output, ending with a note that the state of your infrastructure has been saved]

Also validate in the AWS EC2 dashboard that the instance has been created and is up and running.


To add a name to your instance, modify your aws.tf file to add a tag to the EC2 instance and run the "terraform apply" command again. Note that the EC2 instance now has a Name tag; a sketch of the tagged resource follows.
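
A sketch of the tagged resource; the tag value is an assumption, not necessarily the name used originally (the block-style tags syntax matches the pre-0.12 Terraform releases current when this post was written):

    resource "aws_instance" "example" {
      ami           = "ami-xxxxxxxx"     # same placeholder AMI as above
      instance_type = "t2.micro"

      tags {
        Name = "terraform-demo"          # assumed instance name
      }
    }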


If you are following a canary deployment practice in your cloud setup, you could easily spin up a new infrastructure and application stack from the Terraform module, redirect traffic to the new stack via an AWS load balancer, and then destroy the old setup by running the "terraform destroy" command. The old instance will be in the terminated state after the destroy command has run. For example, we spun up a new instance (version 2.0) and destroyed the old one, along the lines of the sketch below.
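
A hedged sketch of that workflow, assuming the old and new stacks are managed as separate Terraform configurations with their own state:

    terraform apply      # provision the new (v2.0) stack from the updated configuration
    terraform destroy    # tear down the old stack's resources once traffic has moved over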


If you look at the output of the "terraform apply" command above, it says "The state of your infrastructure has been saved." Terraform generates a "terraform.tfstate" file which stores the state/record of our AWS infrastructure; it is a JSON-format file recording the current infrastructure created via Terraform. For a demo Terraform project, storing this state file on a local drive is fine, but if you are going to use Terraform for a real setup then state files should be stored on shared storage, and the recommended shared storage is AWS S3. Below we will show how to create an AWS S3 bucket which can be used to store Terraform state files (this remote state storage feature is described here in the context of Terraform Enterprise).

To create the S3 bucket, we modified the aws.tf file to add an "aws_s3_bucket" resource and ran the same "terraform plan" and "terraform apply" commands. Once the S3 bucket is created, we can configure Terraform (only if the Enterprise version is used) to use it as remote storage for its state files. We also enabled versioning on the S3 bucket (note: S3 bucket names have to be globally unique and follow the AWS naming conventions). The relevant addition to aws.tf is sketched below.
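
A minimal sketch of the added resource; the bucket name is a placeholder and must be globally unique:

    resource "aws_s3_bucket" "terraform_state" {
      bucket = "my-company-terraform-state"   # placeholder; S3 bucket names are global
      versioning {
        enabled = true
      }
    }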


Validate in the AWS console that the bucket has been created. Once validated, we can configure Terraform to use it to store state files. We will show how this can be configured (but can't show the result, as we are currently using the community version of Terraform).

 


To configure this bucket to be used as remote state storage for Terraform Enterprise, run the configuration step below; the state files will then show up under that S3 bucket (in our case, we manually uploaded the state file to the S3 bucket).
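
As a sketch of how this is typically done (the exact command from the original setup is not reproduced here): in Terraform 0.9 and later, an S3 remote state backend is declared in the configuration and then initialized, and this S3 backend is also available outside Terraform Enterprise.

    terraform {
      backend "s3" {
        bucket = "my-company-terraform-state"   # the bucket created above (placeholder name)
        key    = "aws/terraform.tfstate"        # assumed object key
        region = "us-east-1"                    # assumed region
      }
    }

Running "terraform init" after adding this block initializes the backend and offers to copy the existing local state into the bucket.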


This was a short introduction to Terraform and how to use it to automate AWS infrastructure setup. If you would like to read more about this nifty tool, the Gruntwork blog (listed in the references section) has detailed articles written by Yevgeniy Brikman, the author of the book "Terraform: Up & Running". The Terraform documentation also covers each cloud provider in detail, along with all the parameters that can be used in a Terraform resource file.
 

References

https://www.terraform.io/docs/

https://blog.gruntwork.io

Running Docker Containers

by Chandrakant Rai

June 20, 2017

In this blog series we are going to explore how to run Docker containers on different container orchestration services such as Kubernetes and Docker Swarm. We will also explore the container services offered by different cloud providers, such as AWS ECS, Azure Container Service, and Google Container Engine.

This article does not cover best practices for running Docker Swarm or Kubernetes in production environments; it is just a simple orchestration setup for our internal DevOps practice.

Initially, we will explore setting up a simple NGINX container and orchestrating it using Docker swarm mode.

Pre-requisite: a minimum of two VM instances on Google Cloud Platform with Docker CE version 1.12 or above installed.

Docker Overview

Docker is a platform that enables users to build, package, ship and run applications. Docker users package their applications into a Docker image; these images are portable artifacts that can be distributed across Linux environments. Docker is based on Linux LXC, and the main secret sauce behind it is namespaces, cgroups and the union filesystem. Namespaces and cgroups provide isolation for the container environment: some of the namespaces Docker uses are pid, net, ipc, mnt and uts, while cgroups limit resources such as memory and CPU. Docker containers are lightweight and portable and enable consistent environments, or immutable infrastructure, hence solving the common problem of mismatch between dev, test and prod environments.

Newer releases of Docker (from 1.12 onwards) have introduced native orchestration and cluster management capabilities, which can help scale your infrastructure up or down based on the capacity needs of the application. For example, an ecommerce site needs to scale up its infrastructure rapidly to meet demand during the peak holiday season and scale back down during the off-peak season.

Docker Installation

We will install the latest stable version of Docker CE (Community Edition) on a CentOS 7 VM on Google Cloud Platform, following the recommended approach of setting up the Docker repository and installing from it.

To set up the Docker CE repository, the following steps need to be followed:

Install yum-utils.
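
For example, following Docker's standard CentOS 7 instructions:

    sudo yum install -y yum-utils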

 


Use the following command to set up the stable repo.
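
A sketch using Docker's published CE repository for CentOS:

    sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo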

 


Update the yum package index.
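
For example:

    sudo yum makecache fast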

 


Install the latest version of Docker CE.
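
For example:

    sudo yum install -y docker-ce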

 


Start Docker and verify that Docker is installed correctly by running the hello-world image.
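
For example:

    sudo systemctl start docker
    sudo docker run hello-world    # prints a confirmation message if Docker is working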

 


Repeat the above steps on the second VM instance.

Create a Swarm Cluster

Initialize the swarm on the first VM using the command below. This VM is added to the swarm cluster as a manager node; all administrative commands can be executed only on the manager node.
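
A sketch of the init command; the advertise address is this VM's IP on the network shared by both nodes (a placeholder here):

    docker swarm init --advertise-addr <MANAGER_IP>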

 


Copy the swarm join command from the output above and run it on the worker node. Until the worker joins, there is only one node in the swarm cluster, which can be confirmed by running the following command on the manager node.
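
The join command is printed in the output of "docker swarm init" and has the following general shape (the token and address come from your own init output):

    docker swarm join --token <WORKER_JOIN_TOKEN> <MANAGER_IP>:2377

To list the nodes from the manager:

    docker node ls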

 


Creating Services and Scaling the Service Up/Down

Next we will create a simple service from the NGINX container and show how to scale the service up or down based on load. This is not an example of a real-world micro-service application, but it could represent a simple static web site service within an overall ecommerce micro-service application. In future blog posts we will cover how to create an app stack and install the whole stack on the swarm cluster.

The steps to create a service and scale it up or down are shown below.

Run the following command to create a Docker service from the nginx container.
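
A sketch of the create command; the service name "web" matches the name referenced later in this post, and publishing port 80 matches the URL check below:

    docker service create --name web --replicas 1 --publish 80:80 nginx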

 


You can confirm that the nginx service is up by browsing to http://ip_of_vm:80 or running curl http://ip_vm1:80.

Currently only one replica of the nginx service is running, which can be confirmed by running the "docker service ps" command.
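
For example, using the service name assumed above:

    docker service ps web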

 


Now let's scale the service by running the following command; Docker will spread the replicas evenly across all nodes of the swarm cluster. You can confirm that a replica was started on node 2 by looking at the output of the command below. In a real application, both of these swarm nodes would sit behind a load balancer.
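
A sketch of scaling out; the replica count of 4 is an assumption, not necessarily the number used in the original demo:

    docker service scale web=4
    docker service ps web          # replicas should now be spread across both nodes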

 


When load is lower and you would like to scale down your service and run the application on a reduced number of cloud instances, you can drain one of the swarm nodes and scale down the service using the following commands.
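
A sketch of the drain step; substitute the node name that "docker node ls" reports for instance-2:

    docker node update --availability drain <INSTANCE_2_NODE_NAME>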

 


After draining instance-2, scale the web service down to 3 replicas and note that Docker now schedules all 3 replicas on instance-1 only.
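
For example:

    docker service scale web=3
    docker service ps web          # all three replicas now scheduled on instance-1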

 


This was a short introduction to container orchestration using Docker Swarm. In future blog posts we will delve into how an application stack built on a micro-service architecture (using the sample voting app from the Docker site) can be composed as a Docker stack and orchestrated on a swarm cluster, and also into how an Oracle Service Bus stack can be orchestrated on Docker Swarm, so stay tuned. You can also check out the blog post from Oracle, who are likewise tinkering with running Oracle Service Bus on Docker Swarm.

 References:

https://docs.docker.com/engine/getstarted-voting-app/create-swarm/#initialize-the-swarm

https://blogs.oracle.com/reynolds/entry/building_an_fmw_cluster_using

 

IT Knowledge Management, The Key to Effective IT Support Operations

by Mohamed Aly
June 6, 2017

Today's business solutions are increasingly powered by sophisticated IT services, to the point where any IT operations failure, big or small, can dent the business bottom line. The successful delivery of IT operations services relies on the availability of quick, reliable and ready-to-consume information that enables the IT service provider to take the right actions. These actions often need to be based on intimate knowledge of the technology, environment, processes, and business context of the IT solutions supported. A typical enterprise IT operations support organization involves multiple teams, where more than one professional collaborates to provide support that spans multiple tiers of the IT technology stack.

Even within the same team, individual professionals vary in how they acquire and maintain their knowledge. If the organization does not employ effective knowledge sharing and management techniques, it can be vulnerable to operational consistency issues, and it risks depending on a few go-to experts and knowledge pockets that leave it exposed when those people are not there.

The following chart highlights how the seamless flow of knowledge across the organization can address many of these issues.

[Chart: how the seamless flow of knowledge across the organization addresses these issues]

IT operations organizations can tackle these challenges by establishing processes for knowledge retention, and fostering the culture for knowledge sharing across the enterprise. Maintaining the availability of relevant, current, validated and reliable knowledge will support all operations activities, and empowers decision making by ensuring knowledge availability for the right people at the right time.

Why is IT operations knowledge maintenance so unique?
The information technology industry evolves at a much faster pace than most other industries; the amount of knowledge that becomes available, gets used, and then becomes outdated every year is huge. IT knowledge encompasses a variety of technical documentation, operational manuals, business requirements, architecture blueprints and design artifacts, and IT service delivery process documentation.

According to PayScale's recent employee turnover research, the employee turnover rate in the IT industry is one of the highest among all professional industries surveyed. That poses a challenge to organizations: how much of their skills and knowledge investment is leaking out the door? Maintaining knowledge in IT operations thus proves to be even more critical than in other industries.

Another challenge in the IT operations world is the amount of knowledge and documentation produced by IT projects over their lifespans, and how much of this knowledge is lost as the IT deliverables make their way into the operational part of their lifecycle. Maintaining knowledge during the course of a project is far less demanding than during ongoing IT operations services, where knowledge is expected to be maintained and shared indefinitely, continually evolves, and needs to survive staff turnover or service provider transitions.

Scope and requirements of IT knowledge management
IT operations teams need to access many types of knowledge artifacts to perform their support and maintenance work. To list some of these artifacts: there is procedural knowledge for different ITSM (IT Service Management) activities, technical knowledge about platform operations and solutions, business context knowledge, and a multitude of maintenance records and support tickets. Organizations need clarity on all the aspects of knowledge that must be maintained, and on the scope and depth of knowledge required, to ensure that the knowledge management process and any related procedures can meet these objectives.

The scope of service Knowledge Management encompasses the successful management of data, information and knowledge to satisfy the requirement for retained and up-to-date knowledge that is available as needed, in conformance to the organization’s information policies.

The effective management of knowledge will enable the agility of IT operations by guaranteeing that knowledge is accessible, trustworthy, and well organized.
Regular review and validation of knowledge artifacts, along with the policies that govern their maintenance, is critical to keeping these artifacts relevant. These policies should cover not only the authoring, review, circulation and archival of knowledge artifacts, but also the processes to dispose of them at the end of their lifecycle.

IT operations knowledge is not limited to the typical design, build and maintenance documentation; it can also include team experiences, ideas, insights, and judgments, as these more dynamic knowledge artifacts are very effective at spreading awareness across the team.

Examples of IT operations knowledge artifacts
The complexity of IT operations knowledge is inherited from the wide variety of knowledge sources and artifacts that the operations team needs to tune in to. Some of the information required to deliver IT support operations includes:

  • Inventory of IT assets and services, and their lineage to business services and underlying infrastructure mapping
  • Code repositories that are an integral part of code maintenance, versioning and releasing
  • Data level documentation for enterprise data models, schemas, data flows and data lineage
  • Process and procedure level documentation of all service areas such as incident and problem management, access management, release management and monitoring
  • Change Information records that typically include scope and timeline for the change, plan and implementation steps; as well as validation and approval information
  • IT operations service activity records including information about incidents, changes, releases and deployments, and related IT requests
  • IT operations risks and mitigation plans, including backup & recovery, disaster recovery and IT service continuity documentation
  • Operational manuals for IT infrastructure assets, enterprise solution support documentation
  • Team onboarding manuals with clearly defined roles and responsibilities alongside interfacing processes with other teams
  • IT solution business context documentation, including requirements or use case documentation, business criticality and underlying data models
  • IT service coverage, expectations and related SLAs and KPIs, service performance reports, service assessment reports
  • IT service governance or steering committee directives and decision making records
  • IT skills accounting and skill mapping to service requirements, including training and job shadowing plans

Disciplined approach for IT operations knowledge management
While many would consider knowledge management and sharing an organic activity that teams will come to do well without much structure, there are many practices for both the execution and governance of knowledge management that, if in place, will reinforce the maturity of IT operations.
Here are the key questions and activities that need to be addressed in any organization with mature IT knowledge management practices:


1. Where do I get IT knowledge from?
By identifying the sources of knowledge in the organization, and where to look for each type of knowledge content, faster access to credible knowledge is enabled. Organizations also need to pay attention to maintaining the intellectual property rights and copyright requirements for these knowledge sources.

2. What are my organization’s IT knowledge requirements?
Policies need to be well defined for information protection, privacy, security and ownership. Validation criteria for knowledge also need to be in place for the understandability, relevance, importance, integrity, accuracy, confidentiality and reliability of information collected.
IT operations knowledge also needs to address operational or procedural knowledge, tactical and process-level knowledge, and strategic or policy-level knowledge requirements.

3. Classify, contextualize and organize knowledge
Without the right activities to put information into the proper context, this information becomes merely disparate documents, data or siloed pockets of knowledge. The information needs to be organized in a model that facilitates the contribution, use, and sharing of this information.


 

IT organization-wide schemas need to be adopted to collect, organize and disseminate both structured knowledge and unstructured, so-called "expert", knowledge.

4. Institutionalization of knowledge maintenance practices
Information, documentation, and knowledge artifacts need a clear, well-defined ownership model and procedures to keep them current. Governance structures should be in place to enforce the information policies set by the organization. These policies should also cover the security and recovery of the knowledge repositories.


5. On-going transfer of knowledge
Transfer of knowledge is a key part of doing business in IT operations teams. Knowledge needs to be transferred not only within a team, but also across teams within the same IT organization. In addition to as-needed transfer of knowledge, less formal forms of knowledge transfer can be employed. Some of these techniques leverage buddies and role mentors to transfer knowledge between experts and learners, coordinate scripted interview-based sessions, or foster gatherings for communities of practitioners.


6. The full-lifecycle of knowledge artifacts
Having outdated or irrelevant knowledge can hinder the organization's efforts at effective knowledge management. No one wants to see decisions made based on an old document that does not reflect reality.
Periodic evaluation of knowledge artifacts is required so that obsolete artifacts can be appropriately retired. The rule of thumb is that every document in the organization should be reviewed at least once a year, and validated and updated as needed.

7. Organizational culture of knowledge sharing
The successful operation of IT assets hinges on the effectiveness of the knowledge the operations team has. IT leadership can instill and foster a culture of knowledge sharing by communicating the value of knowledge management and supplying the environment and tools. In addition, knowledge management practices should be embedded in all other operational processes, so practitioners can review and update impacted knowledge artifacts.

It pays to invest in managing IT operations knowledge
When IT teams, whether in the project or service realms, invest in building a successful knowledge management practice, the fruits of such maturity are many. The relationship between business and IT improves significantly when the business has confidence that IT knowledge sharing will result in consistent access to skills and service. Turn-around time for investigation, troubleshooting and technical decision making gets much shorter, because access to knowledge is far more effective, which in turn means less downtime for business solutions. And finally, the continuity and stability of IT operations delivery improves, with less reliance on individual team members' knowledge, skills and compartmentalized knowledge, and more streamlined access to well-shared IT operations information.

The journey toward achieving effective knowledge management in the IT operations world can be long and daunting, but organizations do not have to walk that road alone. Seeking a partner specialized in IT service consulting will help organizations leverage cumulative experience and methodologies from the industry, apply best practices, and avoid many traps along the way.

Company Culture, Maintaining What Matters

by Amanda Mallmes

May 30, 2017

Company culture is a well-known buzz phrase, especially in the age of Emotional Intelligence, but why is it important and what does it mean?

In the workplace today, company culture is very important to both the employee and the employer. A great candidate may be won or lost based on the 'feel' of the company. How the company operates influences the culture, and the culture determines how the company operates; in other words, how a company does business. The 'how' of how a company does business impacts the profits it generates (or doesn't) and influences its overall success in more ways than just dollars, which is why it's so important. So how does a small business maintain its culture as it grows?

Company culture is something that is developed and continually changes. When a business is first starting out, typically the owner(s) will be responsible for (nearly) everything themselves, or they directly supervise the employees who do the work. The direct interactions employees have with the owner(s) help to first develop, and then maintain, the company culture.

As the company grows and managers are hired to help lead teams, the amount of direct contact with the owner(s) tends to go down and there’s the opportunity for the culture to change, and when the company grows to the stage where managers are involved with the hiring process, the opportunity for the culture to change increases further.

At each stage of growth, the risk of change to company culture increases. That is, it increases unless it's addressed. By making the core values of a company's culture (its attitude, aptitude, and character) part of the hiring process, a required skill-set, a company can continue to grow and still succeed in maintaining its culture. This is because although culture is continually changing, it can be developed and reinforced. When managers with the right attitude, aptitude, and character are hired, start leading teams, and become part of the hiring process, they act as an extension of the owner(s), assessing the attitude, aptitude, and character of each interviewee for how well they 'fit' the company. The company is growing, the culture is changing, but the core values of the company, its essence, will be maintained.

Three Data Strategy Oversights – Conclusion

by Ziko Rajabali

April 11, 2017

A bridge is only worth crossing if it can hold strong while we cross. When we’re crossing the chasm on a bridge of data, it needs to have integrity. Last week, we discussed chilling the data: why it’s important to put operational data into long-term storage. Unfortunately, teams often pre-aggregate the data when chilling it, under the impression that storage is expensive or that the visualization will always be faster. But what about the questions we don’t know to ask yet? What happens when we don’t know what we don’t know?

Avoid Destruction of Datum Integrity

A single drop of water on its own is inconsequential in the middle of a torrential rainstorm, yet as an aggregate of millions of drops they are of serious consequence indeed. In the same way, when using cold data to address a question, each solitary datum is of negligible value; it's the aggregate of large datasets that leads to actionable insights. However, because each datum is aggregated into data, any mutation of the underlying data points will have an amplified effect on the aggregate. Thus, even though each datum is of negligible value, the integrity of each datum is highly valuable.

The most obvious example of datum integrity destruction is scrubbing. When a human touches data before it enters the analytics system, aside from the aspects of the data that are targeted for scrubbing, there is an introduction of bias. Agency theory kicks in and small adjustments are made to the data to make it appear favourable (in whatever way the individual touching the data perceives favour). The organization’s culture will also seep its way into the data through Groupthink. It’s possible that cognitive dissonance causes an individual to adjust data when personal values are at odds with corporate value, and the list of possible biases goes on. When these biases are consistently introduced into the data, the aggregation of the data will amplify them, and can result in overtly poor decisions.

Datum integrity can also be destroyed when dealing with unstructured data like emails. This data needs to be tagged with metadata to give valuable search and analytical results. When large swaths of unstructured data get arbitrarily dumped into a big data storage system without metadata attached, optimistic algorithms end up including data that is irrelevant (false positives) and pessimistic algorithms end up ignoring data that is actually relevant (false negatives). In this situation, the best outcome is that the untagged data is unused and simply occupies space, while the worst outcome is that the untagged data wrongfully skews algorithm and analysis results. When a company first begins collecting unstructured data, it is unreasonable to expect that all use cases will be known in advance. Nevertheless, putting some consideration into how data should be tagged will produce results far more trustworthy than arbitrarily dumped data.

Arguably, pre-aggregation is the worst and most common culprit of datum integrity destruction. Using the argument that each datum is of negligible value on its own, some businesses store pre-aggregated data to help solve challenges relevant to the present. However, by storing the pre-aggregated data, the data cannot be analyzed by evaluating different aggregations and it becomes impossible to drill-down to lower levels of granularity. Let us use an example based on public transit. In an attempt to determine stop utilization by passengers, the operating company has stored the following information for analytics. Notice that we are provided with an aggregate over the month: the number of times the bus stopped at a designated area to pick up passengers. Of interest, stop 2 appears to have been frequently skipped, suggesting low utilization.

Route Number | Stop Number | Month | # Times Stopped
301          | 1           | Sep   | 25
301          | 2           | Sep   | 20
301          | 3           | Sep   | 23

The challenge is that we don’t have a way to evaluate the data along other dimensions. Perhaps if the raw data were explored based on route + stop + driver, a pattern emerges where the regular driver was sick on the days that stop 2 was skipped (implying a lack of training or awareness). Perhaps if the data was explored in conjunction with weather, a different pattern emerges showing that only on the rainy days was ridership low at stop 2 (possibly because stop 2 was not covered). The bottom line is that by pre-aggregating the data before storing it for analysis, this fictional transit operator has severely cut the value of its data. In reality, this problem occurs because our present selves don’t know what problems our future selves are trying to solve, so by storing a currently relevant perspective of the data instead of the raw data, we do ourselves a disservice.

Conclusion

Make sure there is a bridge to cross when you get there! Treat data as a long-term resource and use it to build that bridge. Start with the data traditionally seen as valuable from ERPs and financial systems, but don’t overlook uncommon sources of data. Actively brainstorm external and unstructured data sets like email and excel reports, and collect and chill all of the raw data generated at the lowest levels of operations. Capture data in its rawest form possible and avoid de-valuing the integrity of each piece of datum. Doing this will yield insight into resourcing, operations, market considerations and competitive advantages, especially if enough data has been collected to be statistically significant over the course of prosperous seasons and equally difficult ones. Bear in mind that no answer will be automatic or self-evident, especially if the question itself is unclear as exaggerated by the late Douglas Adams. However, when used as a tool in the arsenal of strategy, data can be a critical success factor to objectively guide, support and validate all corporate decisions.

If your team is seeking to better understand the data landscape, consider attending Big Data: A Peek Under the Hood in Calgary on May 4, 2017 and in Vancouver on May 11, 2017.

 

Three Data Strategy Oversights – Part Two

by Ziko Rajabali

April 4, 2017

We’ll cross that bridge when we get there only works if that bridge already exists. When that bridge is a strategic decision, it is a bridge of data. Last week, we talked about unconventional sources of data, like emails. In a way, it was like unearthing a new raw material for the bridge. But what about the refinement of known raw materials? How do we maximize the value of data that comes from conventional sources?

In part 1 of this series, we mentioned how sensor measurements are often overlooked in data strategy. Sensors are often used in a closed-loop automated system, where the hot data is immediately used by the embedded system as input into an algorithm that maintains optimal running levels. Once the measurement has been taken and applied by the control system, that data point is often discarded.

For example, the readings from a pressure sensor inside a chamber might be used by the injection nozzle to control the flow of gas into the chamber to stay within a specific threshold. This measurement and resulting control operation is real-time, usually measured in milliseconds. But instead of discarding this data point, the system could pass it on to some form of storage like a log file, a database or even a local file. On a regular schedule, the storage can be transmitted to an enterprise storage system like a cloud drive. This data is now cold – each individual data point is useless to the system that was using the hot data, but as an aggregation the data can be very helpful in observing patterns in long-term maintenance of the system and appropriately weighting the value of that one component in the overarching system.

It’s plausible that, in this example, the container is itself a $200 item and a failure is engineered to have a minimal impact on the rest of the system. In this case, an effective mode of managing the container might be to let it fail, replace it, and carry on. However, in this hypothetical scenario the technicians replacing the container always happen to bump into a pipe that feeds into a $20,000 component. This loosens the pipe and every third $200 container failure leads to a $20,000 failure. Without knowing what the technician is doing, an Engineer analyzing the $20,000 failure would conclude that the pipe fitting needs to be rejigged. However, if the cold data is analyzed for each failure, a trend might be observed where three $200 replacements lead to a whopping $20,000 cost, leading to a change in the maintenance strategy of the $200 container. This example is simplistic with only one degree of separation; in reality, the causal event chain is not so straightforward. It’s also important to remember that not all correlations are causal, but the data does offer objective attack vectors for a root cause analysis.

Be sure to properly value the raw data generated by sensors or the closest equivalent in your company’s operations. Be careful not to discard data without weighing out the pros and cons of chilling the data for long-term storage. Advancements in the areas of Big Data, IIoT (Industrial Internet of Things) and cloud technologies have significantly reduced the barriers for data strategies to capture and chill the output from systems that generate massive amounts of hot data.

Click here for the conclusion!

Three Data Strategy Oversights – Part One

by Ziko Rajabali

March 28, 2017

In the face of day-to-day operational concerns, it’s easy to adopt the mantra we’ll cross that bridge when we get there. There is wisdom to this philosophy: given limited resources, activities must be prioritized to attain success. But what do we do if the proverbial bridge doesn’t exist?

Data is the raw material of the bridge to cross when we get there

When that bridge is a strategic decision, it should be made out of data that spans successes and failures, seasonal changes, political and environmental cycles. This doesn’t undermine intuition or automate the decision; rather, it provides an objective background to guide and support the decision. The challenge is that many organizations are crossing a bridge that doesn’t even exist. They are trying to make informed strategic decisions with sub-optimal data, and it’s usually because one of the following three data strategy considerations has been overlooked.

Discover Valuable Sources of Data

Even in organizations where data from all applications & ERPs are not integrated, most have found a way to explore the data through built-in reports or tools that directly access the data in the database. While this has its pitfalls (like grinding a production system to a halt), at least they are aware of the value of that data and it is used when making important decisions. However, there are many sources of data that are overlooked for their potential, including sensors used in operational feedback loops, external sources such as weather, and unstructured sources such as excel sheets for operational reports and even emails.

Emails are a great example of unstructured data because it is usually stored for many years if not indefinitely and when aggregated and siphoned through a machine learning algorithm, can yield some very interesting insights. For example, by using a sentiment analysis on past emails, an organization can get an objective measure of morale over time. This can be highly valuable when planning certain activities that are perceived as highly positive (bonuses, employee appreciations) or highly negative (layoffs, weak earnings). Let’s carry this idea a little bit further. If the objective measure of email sentiment analysis is run through a predictive analytics / machine learning algorithm, we might uncover a leading indicator about how morale shows a rapid dip right before the company loses a group of high-talent employees through attrition, or we might discover that sales always shows a spike after morale jumps instead of the other way around. Whatever the actual discovery, it’s reasonable to expect that individual actions will influence the entire group and therefore the outcome of the business.

When planning long-term data strategy, it’s easy to ignore additional data sources using anecdotal arguments. “There’s just too much data to transmit and store” is common for raw sensor data; “That information is outside our control” is typical for external sources of data; and “It will be too expensive to clean up and organize” is usually argued for unstructured data. The truth is that most of these perceived hurdles have been addressed by technology advancements. Don’t casually ignore these sources of valuable data without objectively considering them as part of your data strategy.
Click here for part two!

 

 

 

Why would IT need an old-time Puppet?

by Carlos Herrera

March 21, 2017

In this ever-growing and ever-changing world of IT, one recurring question we ask is: how can we keep up with the constant changes in technology so we can focus our time where it matters most for the organization?

After spending countless hours troubleshooting across different technologies, operating systems, programming languages and more, finding those nasty needles in all those haystacks (most often put there by human error), the same question kept coming back to my mind: why are we still depending on manual work when we are so deep into the so-called 'Digital Era'?

One of the reasons I chose IT back in 1989 was that the computer was the tool of choice for making work more efficient, predictable, repeatable and even more accurate than manual work could ever achieve. At the end of the day, businesses need their processes to be repeatable with consistent quality output; this, in turn, sparks that so-called continuous improvement process we all want.

Imagine if someone or something could use a computer and obtain those repeatable outputs over and over again (with the required quality level of course). That would be a robot, not a human being, but then again, what if we had control over that robot? That is the concept behind Puppet Labs’ brilliant idea: An automated ‘Puppet’ that you can control to achieve high-quality, repeatable steps. Every. Single. Time.

In today's digital world we have way too many tools to get the work done. Imagine taking control of all those tools and reducing manual work, if not eliminating it completely, to a bare minimum. Puppet achieves exactly this. Using a simple language, you can instruct your automated Puppet to do the work for you in a consistent and repeatable manner, eliminating configuration drift in your infrastructure and freeing up your valuable people. People are no longer required to do the tedious, repetitive work, which frees up their time to build better solutions that continue to enable your business.

Have a variety of platforms in your infrastructure farm? No problem! Puppet works on the usual *nix, Windows, Solaris, AIX, Ubuntu/Debian… the works. Therefore, control over each platform through a single point, or "Puppet Master," becomes a reality. The newest release even comes with fault-tolerance capabilities built right in.

If you are more of a prove-it-to-me or hands-on kind of person, I highly recommend getting the Training VM for Puppet Enterprise. The easy step-by-step instructions will guide you through an overview of all the powerful features Puppet has to offer, while working in a full-blown environment where you can quickly grasp how the Puppet pieces would work together nicely with your unique infrastructure.

If you just want to make a few fast, repeatable changes to your infrastructure instead of doing them manually, use a batch or shell script… seriously. When you want to get real and manage many servers at the same time, that is when Puppet becomes a very handy tool.

So, do we need a Puppet in IT? I think we do, and NASA, Intel, Disney, HP, Sony and Uber seem to think the same… you get the picture.

* Groundswell is not a reseller of, or affiliated with, Puppet products in any way. This is a personal review of a useful tool we have encountered in the marketplace.

 

Big Data, a Peek Under the Hood

You Are Invited!

Big Data has become a bit of a buzz word, but there is no doubt that the transformation of ‘data’ is giving businesses the cutting edge advantage that often translates to a healthier bottom line. In the coming era of data lakes and pervasive business intelligence, what can executives do now to be ready? Whether you have already begun your data journey, or are only recognizing the demand, this thought-provoking afternoon will educate on the possible potholes and roadblocks of supercharging your data.

 

Event Summary

Big Data, a Peek Under the Hood is a half-day, multi-vendor conference with some of the biggest names in data transformation. Join your data on a journey from inception to transformation as Groundswell and its partners educate on how to use the data you have, and the data you don't think you have, to drive your business into the future. This compelling afternoon will focus on practical solutions for business, and how to use your data to get from where you are to where you want to be.

 

 Who Will Benefit From Attending?

IT leaders and established Information Management practitioners needing to understand what’s next in advanced BI and data integration, as well as business leaders who want to learn how data can be turned into actionable insights that drive top line performance, bottom line results, and reduce corporate risk.

 

Calgary

When
Thursday May 4, 2017
11:30am – 6:30pm

Where
Hyatt Regency Calgary
700 Centre St S.E., Calgary, Alberta T2R 0G8, Canada

Vancouver

When
Thursday May 11, 2017
11:30am – 6:30pm

Where
Hyatt Regency Vancouver
655 Burrard Street, Vancouver, British Columbia V6C 2R7, Canada

 

For more information regarding this event or to inquire about registering, please contact us here.

 

 
