Power BI – Making a Navigation Pane

Sometimes we want uniformity in our dashboards...

We want all our pages to share a common theme and usability that the customer or end user can rely on. I like my dashboards to have a central navigation pane that not only lets users move between the different dashboard pages but also provides a common set of slicers that can be used across most of them. Of course, there may be times when we need to alter this common pane to accommodate different visualizations or slicing options, but for the most part the same structure can be reused to great effect. 

To get started, create a slicer for each field you would like the end user to be able to filter by. It’s important to note that if a field I want to slice by exists in both the Fact Table and a Dimension Table, I will take it from the Dimension Table. The general rule we want to follow is to almost always slice Facts against Dimensions. 

These slicers will form the basis of the Filter Header. 

Next, we need to construct the navigation pane that will house these slicers. I like a basic setup: two navigation arrows (one for going back a page, one for going forward a page), a custom “Home” button, and a “Q&A” button that allows the end user to ask questions directly of the data. These buttons are then placed on top of a text box containing the title of the page. If you lose your slicers behind the text box, don’t worry; we’ll cover how to edit the layer order shortly. You may choose to go a different route. I have found there are a million different ways to accomplish the same task in Power BI, so if you choose not to have navigation arrows, that is OK. Do whatever provides the required level of functionality for your dashboard. 

To create the custom “Home” button, simply navigate to Add a button and select a Blank button. This is quite a useful option as you can create a button with a custom image that is linked to an action within the dashboard. 

To link this newly created button to an action, click on the button and go to the Format button pane; the second-last option is Action. 
 

When the user clicks this button, they will be taken to the Power BI page titled “Home Page”. 

Now a bit of housekeeping...

Add some labels to the navigation arrows and change the background to match that of the text box. 

Next, we need to change the slicer types to best fit in the navigation pane. In doing so, I will also alter the year selector to show only the last 4 years, as this data will be most relevant. 

Dropdowns for most of the selections make for a clean, easy-to-use solution, with buttons for the years to add some variety. 

Now the issue we face is that the dropdowns have disappeared behind the yellow text box as seen below.  

This can be remedied by changing the Layer Order, which is found under the ‘View’ tab, under ‘Selection’. 

Under this Selection menu, we can change the layering order of the elements we have within the Dashboard. We want to move the text box containing the title to the back and move the dropdowns to the front. This is done by clicking and dragging the elements in the side menu into the desired order. 
 

All that is left to do is group the navigation pane elements together so we can easily duplicate them onto our other dashboard pages. To do this, select all elements of the navigation pane in the Selection side menu, group them, and name the group to distinguish it from any other groups we may create in the future. 

We now have a complete Navigation Pane that we can utilize throughout the dashboard as we see fit. The Selection and Grouping functions are useful tools that, once mastered, let you create intricate, multi-layered dashboards, and they can save you a mountain of time when you want to re-use large parts of a dashboard across Power BI files. 
 
I hope you have learnt something new or improved your skills in Power BI by reading this article. As always, feel free to reach out on Discord if you have any questions or want to chat about Data. 

Power BI – Back to the Future – The Time Dimension Table

In one of my previous RallyPoint articles, Star Schemas and Fact vs Dimension Tables, I introduced you to the Star Schema and the importance of Dimension Tables. In case you missed it or need a refresher, a Dimension Table is a static reference table used within a Star Schema Data Model to take the load off the main Fact Table (which contains specific event information).  

In order to fully maximize the use of this type of Data Model

we need multiple reference tables (Dimension Tables) spread around the Fact Table, and one of the most essential references is Time. Almost every Fact Table will contain some kind of time reference recording when specific events occurred. While Power BI does have built-in date slicing functionality, we are limited to slicing only by Year > Quarter > Month > Day. It can also become difficult to compare rows between Fact Tables if there is no common Date Dimension to slice by. 
 
This issue can be solved quickly with an imported Power Query script specifically designed to give you your desired date range with the flexibility to drill down.  

First download the script below:

After that, choose the Power BI file you want the Time Dimension Table added to and add a Blank Query: 

Next, in the newly opened Blank Query, navigate to the View tab and click Advanced Editor. 

Following that, open the script in a text editor (such as Notepad) and copy and paste the contents into the Advanced Editor. 

The only parts that require manual input are the FromYear and ToYear. Change these years to suit your needs and click Done.  


What you are left with is an extensive Time Dimension Table that acts as a centralized time slicer and can be utilized to provide in-depth analysis of your data! This may not seem important for your simple two-table dataset, but once you start to add complexity and additional data from other sources, the Time Dimension Table pays for itself! 
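If you are curious what a table like this looks like under the hood, here is a rough equivalent sketched in Python with pandas rather than Power Query M. The column names and the FromYear/ToYear values below are illustrative assumptions, not the script's exact output:

    import pandas as pd

    # Illustrative stand-ins for the script's FromYear and ToYear inputs
    from_year, to_year = 2018, 2022

    # One row per calendar day across the chosen range
    dim_date = pd.DataFrame(
        {"Date": pd.date_range(f"{from_year}-01-01", f"{to_year}-12-31", freq="D")}
    )

    # Typical drilldown columns for a Time Dimension Table
    dim_date["Year"] = dim_date["Date"].dt.year
    dim_date["Quarter"] = dim_date["Date"].dt.quarter
    dim_date["Month"] = dim_date["Date"].dt.month
    dim_date["MonthName"] = dim_date["Date"].dt.month_name()
    dim_date["DayOfWeek"] = dim_date["Date"].dt.day_name()

    print(dim_date.head())

Every Fact Table with a date column can then relate to this one table, giving all of your data a common calendar to slice by.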

 
As always if you have any issues with this, or anything else data related, feel free to reach out! 

christopher-jaames.dennis@withyouwithme.com 

Data Analytics – Web Scraping (No Programming Required)

If you are like me and have completed the WYWM Data Analytics Pathway...

but you want to keep your skills sharp, then you need to get your hands on some data and get to work. The question is, though, where are you getting your data? All the exercises and projects within the Pathway have the data already organized and provided for you. It is a nice luxury to have, but in the real world it is up to you, the Data Analyst, to find the data required to make that exciting, thought-provoking dashboard. 
 
I am still quite a junior in the DA world, and I am ashamed to admit that my Python skills are almost non-existent. I am trying to learn, but it is just taking a little longer than anticipated. So, when I had the idea to find some data relating to the property market in West Melbourne, I found myself stuck. I searched far and wide for datasets that would fit my needs, but alas, to no avail. 
 
That is when I stumbled across a solution to my problem: Web Scraping. Web Scraping is the process of reading the underlying HTML (Hypertext Markup Language) code of a website, extracting the parts that we need, and then formatting the extracted data in a way that lets us Data Analysts go about our work. Now, this may be off-putting if you do not have any experience with HTML, but the solution I chose makes the process as easy as clicking a few buttons. A word of warning though: Web Scraping is not considered illegal in Australia; however, it can be in breach of a website's terms and conditions of use. As a general rule of thumb, if you are only using the data for personal use and not for commercial gain, you have nothing to worry about. 
 
The solution I chose was the Web Scraper – Free Web Scraping plugin for Chrome. This is a little-to-no-code solution that allows us to set up our scrape by selecting which HTML elements we want the information from. 
 
The example I will be using is the West Melbourne property data, scraped from a real estate website. Some websites are quite protective of their data, as evidenced by the restrictions they put in place to stop people from extracting a complete dataset. The main issue I faced was that I could not view beyond 50 pages of property data, and each page only contained 25 properties. To work around this, I set up multiple scrapes of narrower searches. For example, instead of searching for multiple suburbs at once, I would search for each individual suburb and run the scrape, then change the “sort” order and re-scrape to get a wider set of data. 

To get started

Head over to the Chrome Web Store and install the plugin here. After installing, navigate to the page you want to scrape and hit F12 on Windows (or Cmd+Option+I on Mac) to open the Developer Tools, then click on Web Scraper. 

For the first step, let’s create the Sitemap by giving it a title and copying in the URL of the website that we want to scrape. Since we want to scrape multiple pages, we find the page number in the URL and replace it with square brackets and a range, e.g. [1-50]. Because I know that Real Estate.com won’t allow us to search beyond 50 pages, I set the URL to cover pages 1 to 50. 

Once the Sitemap is good to go, we need to set up our selector. The selector is the HTML element that the Web Scraper will take from each webpage. To do this, click the “Add new selector” button. From here, give the selector a descriptive name and set the “Type” to “Text”; this will pull the actual text that the HTML represents on the webpage. The next stage is to select the HTML that we want to scrape. You would usually make a selector for each piece of information you want to scrape and tie this back to a parent selector; however, in this instance, due to the way Real Estate.com has formatted its HTML, it is easiest to scrape the whole ‘residential-card’ and then separate the fields in the data cleaning process. Upon selecting the HTML, also remove the address identifier to tell the Web Scraper to scrape all instances of the div.residential-card rather than just the one that we physically selected. Select “Multiple” and hit Save. You can also hit the Data Preview button to make sure that you have indeed selected the correct data. 

Now it is as simple as starting the scrape. The plugin will then open a mini-window and start querying the 50 pages we have selected.  

Upon completion, simply hit refresh in the original Web Scraper window (not the pop-up window) to display the data, then export it as a CSV ready to clean. This sort of data is not normalized or uniform, so you will have to be creative with your cleaning. For example, I found that not all listings had complete data: some properties did not list a land size or any garages. I challenge you to try this out for yourself and see what sort of data you can scrape! Have fun! 
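If you want a head start on that cleaning step, here is a small sketch in Python with pandas. The file name, column name and patterns below are hypothetical, since your export depends on how you named your sitemap and selectors:

    import pandas as pd

    # Hypothetical CSV exported from the Web Scraper plugin
    df = pd.read_csv("west-melbourne-property.csv")

    # The whole residential-card was scraped into one text column, so pull out
    # the pieces we need with regular expressions (expand=False returns a Series)
    df["price"] = df["residential-card"].str.extract(r"\$([\d,]+)", expand=False)
    df["beds"] = df["residential-card"].str.extract(r"(\d+)\s*bed", expand=False)

    # Not every listing is complete (no land size, no garage), so expect missing values
    df["price"] = df["price"].str.replace(",", "", regex=False).astype(float)

    print(df[["price", "beds"]].head())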
 
Thanks for reading and I hope this has a useful application in your Data Journey. Feel free to try it out yourself and reach out if you have any issues! 

SAP at WYWM - SAP ERP Data Visibility - How I saved my company millions.

Reach out to any of my managers from my previous job and they will tell you that my ability to use and analyse data in SAP ERP made them distinctly uncomfortable at times. Let me tell you why.

WithYouWithMe Chief Customer Officer (and total force of nature) Tom Larter loves to deliver what he calls Truth Bombs to customers. Truth Bomb No 3 - Focus on Skills over Controls - speaks to the tendency of Big Business to rely on control-heavy Big Data Management practices. It's easier to teach employees how to follow rules than it is to change their ways of working. It's also easier to segregate and control access to data than it is to embed desirable data management practices in the company culture.

Excessive organisational controls around data slow down decisions and increase both time to insight and time to respond to customers – slowing down the whole business. In business - as in the military - anything that slows down response times puts the organisation at a distinct disadvantage.

Digitally skilled workforces reduce time to decisions, speed up response to customers (or on the battlefield) and supercharge the ability of the team to respond to insight.  In the SAP world a digitally skilled individual is referred to as a Super User.

The SAP ERP Data structure in the multinational organisation I was working for was highly segregated and tightly controlled. SAP calls it Role Based Authorisation. In the military it's called "Need to Know" - if you don't need to know it you can't see it and you aren't allowed to make decisions on it. My job was in the office doing administration - about as low as you can get - with appropriately restricted data access and management authorisations.

Normally - someone in my role would have had very limited visibility on organisational data - hence a very restricted ability to act quickly in the interests of the business. The key difference in my situation was that my composite role required direct access into most of the SAP ERP Data Silos. I could see more than I should have been able to.

In practice - all of the controls around data visibility didn't apply to me. I could see through the silo walls and I wasn't afraid to look.

In many ways I had wider and more immediate visibility on the daily tactical business data than my Managers did.

Did that worry them? You bet it did!

Did I leverage that tactical advantage in the interest of the business? You can count on it.

Did I seek permission on every data driven decision? Nope.

Did they always like it when I used that data to hold them accountable? Nearly got sacked for it.

Did I defer to their position when my military logistics/project management experience showed me a better course of action? They wish.

Did I stay in my lanes? No fun in that and no advantage to the business profitability.

Did I use that visibility to clean up data and fix business processes? Oh hell yes.

Did I use that data to cut costs and reduce wastage? What do you think?

Was I a pain in the butt? Unquestionably.

Did all that improve the bottom line?

In the millions.

One person. Using the data available.

Train your people.


Dimensional data models – Fact Tables vs Dimension Tables

There comes a time when your data outgrows its model

This will become apparent quite quickly if your idea of a data model is just one super-wide table. The concept of a data model takes some getting used to; I still sometimes manage to get confused by it. A data model is a way of organizing your data so that, as your dataset continues to grow over time, little to no manual action is needed from the Data Analyst to incorporate the new data into the pre-existing file. There are a couple of different types of models that can be used; however, the most common one is the Star Schema Model. 

The Star Schema is a desirable model as it promotes usability, performance and scalability, and allows for simpler DAX. Now you might be like me when I first heard the term ‘Star Schema’ and think to yourself, “Sure, those are some nice words Chris, but why should I care? My data model works just fine the way it is right now.” And sure, you are OK for now, but what happens to your model once you add another 20k rows in a year's time? Will your model (or lack thereof) be able to handle it? 
 
The simple answer is no; and even if it does hold up, it will cause significant frustration along the way.  

As you can see in the above example, when the data is not modelled, the table is too wide, making it quite difficult to understand. 
 
The schema is modelled on, you guessed it, a star, with Fact Tables in the middle of the model and Dimension Tables at the points around the outside.  

A Fact Table is a table that contains specific event data; this can be transactional data such as sales or appointments. Each row in the table refers to an individual event, and there may be hundreds or thousands of these entries in any given Fact Table. The information contained in the Fact Table should be specifically tied to the event. An example would be a Property Sales Table that contains only data directly relating to the sale of each property. 

 

A Dimension Table contains information related to a business event; it is usually static in nature and rarely changes. These tables sit around the Fact Table and are used as reference tables. An example would be a Suburb Table. This negates the need to include all the suburb information in the Fact Table, which would make it wider, harder to read, and slower for Power BI to process. 

Dimension Tables include a single unique reference per row, which corresponds to a column within the Fact Table. It is also important to note that Fact and Dimension Tables almost always connect to the opposite table type; it would be extremely rare to see two Fact Tables or two Dimension Tables linked to each other. 

The model is created this way to allow the Fact Tables to continue to grow without that growth inhibiting how we use the data. The Fact Table references a Dimension Table whenever it requires further information that is not held within its own table, which increases the performance and usability of the data model. 

These two types of tables are linked together by what is known as a Surrogate Key

A Surrogate Key is a unique identifier, common to both table types, that is used to create the linkage between them. A Fact Table will contain multiple entries of the same Surrogate Key, whereas the Dimension Table contains only one row for each key. The Fact Table ‘looks up’ the Dimension Table for the extra information held against that key.  

The tables can then have different relationship types, depending on the type of table and the direction of the relationship. For the above example, instead of having all the suburb information in the Property Sales Data Table, we use a separate Dimension Table called Post Code to obtain the relevant suburb information such as postcode, country, state, etc. These two tables are linked by the Suburb Surrogate Key, with the relationship being Many to One.  
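If it helps to see the idea outside Power BI, here is a small sketch in Python with pandas; the table contents and column names are made up for illustration, not taken from a real dataset:

    import pandas as pd

    # Fact Table: one row per sale, so the Suburb surrogate key repeats
    property_sales = pd.DataFrame({
        "SaleID": [1, 2, 3, 4],
        "SuburbKey": [101, 102, 101, 103],
        "SalePrice": [650000, 720000, 810000, 590000],
    })

    # Dimension Table: exactly one row per surrogate key
    suburbs = pd.DataFrame({
        "SuburbKey": [101, 102, 103],
        "Suburb": ["Werribee", "Point Cook", "Tarneit"],
        "Postcode": ["3030", "3030", "3029"],
        "State": ["VIC", "VIC", "VIC"],
    })

    # The Fact Table 'looks up' the Dimension Table on the shared key
    model = property_sales.merge(suburbs, on="SuburbKey", validate="many_to_one")
    print(model)

The validate="many_to_one" argument makes pandas raise an error if the Dimension Table ever contains a duplicate key, which is exactly the Many to One relationship described above.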

There are 4 different relationships that the tables can have with each other. These are: 

  1. Many to One - Many instances of the Surrogate Key on the first table and only one instance on the second table 
  2. One to One - One instance of the Surrogate Key on the first table and only one instance on the second table 
  3. One to Many - The same as Many to One but the other way around 
  4. Many to Many - Many instances of the Surrogate Key on the first table and many instances on the second table (it is recommended not to utilize this unless dictated by the complexity of the data model) 

You can also choose whether you want a relationship to transfer information in a single direction or in both directions using the Cross Filter Direction setting. For example, if the cross-filter direction is set to Both, information can travel from table A to table B as well as from table B to table A. However, if it is set to Single, information can only travel one way (from table A to table B or from table B to table A, but not both).  

I hope this sheds some light on the distinct types of tables and their use within the Star Schema Model. It is something that seems foreign at first but once you put it into practice it will change the way you model your data forever. 
 
Next time I will show you how to develop your new Star Schema skills even further with the power of the Time Dimension Table. Feel free to reach out on Discord for any of your Data related needs! 
 
christopher-jaames.dennis@withyouwithme.com 
 

INTRODUCTION TO PYTHON JUPYTER NOTEBOOK

Ivan Josipovic – Data Analytics

What is Jupyter Notebook?

Jupyter Notebook is an open-source data analytics application that runs in your internet browser.

It allows data analysts and scientists to create a single document that can contain data visualizations, comments, math equations and other media. This greatly speeds up how data is processed, visualized, analyzed and reported on, all using one of the most widely used programming languages in the world: Python!

How do I download and install Jupyter Notebook (for Windows)?

Simple really, follow along and let's do this!

1. Google: anaconda python download.

2. Click on this website: www.anaconda.com

3. Navigate to “Anaconda Individual Edition” download.

4. For Windows, click on Download Anaconda Individual Edition.

5. Once downloaded, double-click on the download and follow the prompts.

6. Once installed, in the search box next to the Windows start menu, type: Anaconda.

7. Anaconda Navigator should show in the results. Click on it.

8. Once open, there is a selection of tools and programs. Click on Jupyter Notebook.

9. It should have opened in your default internet browser as a new tab.

10. You will see a number of folders shown. Select the most appropriate folder where you wish to store all your Jupyter Notebook code documents. For ease of use, perhaps create a new folder on your desktop and then, in the Jupyter Notebook browser tab, navigate to that folder.

11. Click on New. Select Python, and a new tab will open with your new blank document.

LET'S START WITH THE BASICS OF PYTHON IN JUPYTER NOTEBOOK

Variable Types

There are 4 types of variables we will be working with in Python in Jupyter.

These variable types are:

Integer, also known as a whole number; in Python it is represented as ‘int’.

For example, let's assign the value of 2 to the variable x:

            x = 2

When we run x, it will return the value of 2.

(to run a line of code in Jupyter Notebook, hold SHIFT and press ENTER)

In other words, we have stated that in the variable named x, like a basket called x, we have placed the value of 2, to be recalled whenever we say “x”.

In Jupyter, we can check the variable type simply by typing:

            type()

*in the brackets we would type in the variable name, so that we can check what type of value it holds!
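For example, checking the variable x that we created above:

            type(x)

Would return int, because x is currently holding the whole number 2.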

Float is a number with a decimal place. In other programming languages it might be called a double. However, in Python it is called a Float.

For example, let's assign the value of 4.5 to the variable y:

            y = 4.5

When we run y, 4.5 is returned!

String is a sequence of characters surrounded by quotation marks; a single character in quotation marks is simply a string with a length of 1.

For example, let's assign the string "hello" to the variable a:

            a = "hello"

and the string "there" to the variable b:

            b = "there"

We can test what this returns:

            a + b

(SHIFT + ENTER)

Would return ‘hellothere’.

But this looks messy! Ok, no stress… Let's add something to our statement:

            a + " " + b

Would return ‘hello there’ (please note, a space between quotation marks results in exactly that: a literal space between the two words).

Logical, known in Python as a Boolean or ‘bool’, is a value defined as being either True or False.

For example:

            A = True

When we run A, we would get True.

            n = 4 > 5

This is saying the statement ‘4 is greater than 5’ is assigned to the variable n.

When n is run, we would get:

            False

Because we know that there is no way that 4 could be larger than 5.

Likewise, if we assign the following to m and then run it:

            m = 10 > 3

We would get True in return!

So far we have been simply running the variable name in order to return its assigned value.

This works in Jupyter, which displays the value of the last expression in a cell. However, the explicit way, and the one that works in any Python script, would be:

            print()

Whatever variable you wish to run and return, you would place in those brackets!
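For example:

            print(x)

Would return 2, and:

            print(a + " " + b)

Would return hello there.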

Working with Variables

How do we work with variables, you might be asking...

Well, much the same as basic maths arithmetic.

            a = 10

            b = 5

            c = a + b         (addition)

            print(c) = 15

            d = a – b         (subtraction)

            print(d) = 5

            e = a * b          (multiplication)

            print(e) = 50

            f = a / b           (division)

            print(f) = 2.0   (note that division in Python 3 always returns a Float)

One can also add strings together! For example:

            greet = "hello"

            name = "Steve"

            message = greet + " " + name

            print(message) = hello Steve

Boolean Operators

These are the symbols or operators we use in a logical test statement to determine whether a value is TRUE or FALSE.

== equals

!= not equal (some other languages use <>, but Python 3 only accepts !=)

< less than

> greater than

<= less than or equal to

>= greater than or equal to

and (both conditions must be True)

or (at least one condition must be True)

not (inverts True to False and vice versa)

Examples below show the use of Boolean Operators.

5 == 5  (five is equal to five)

= True

3 == 5  (3 equals 5 which is of course not true and returns a False)

= False

7 != 4   (seven is not equal to four)

= True

5 > 6    (five is greater than 6)

= False

3 < 4    (three is less than four)

= True

show = 4 < 5

show2 = not(6 > 2)    (not turns whatever is true to false and vice versa)

print(show2)

= False

show or show2          (true value for show or show2)

= True

show and show2       (and requires both called variables to be true)

= False

IF Statement

An if statement is used to execute code once, only if its condition evaluates to True.

For example:

            apples = 23

            bananas = 46

            if bananas > apples:

                        answer = "There are more bananas than apples"

            print(answer)

This would return

            There are more bananas than apples

Because yes, 46 (which is assigned to bananas) is a greater number than 23 (which is assigned to apples)!

IF ELSE Statement

The if else statement is used to choose between two possible outcomes when the code is executed: one branch runs if the condition is True, the other if it is False.

For example:

            apples = 23

            bananas = 46

            if apples > bananas:

                        answer = "There are more apples than bananas"

            else:

                        answer = "Check your eyes bud, there are more bananas than apples"

            print(answer)

This would return

            Check your eyes bud, there are more bananas than apples

Because in the if condition we said: if there are more apples than bananas, do this. But there are more bananas than apples, so the else branch runs instead.

Nested Statements

A nested statement is exactly what it sounds like, a statement within another statement. To spare your eyes the image of an item within a nest, look to the example below.

            apples = 23

            bananas = 23

            if apples > bananas:

                        answer = "There are more apples than bananas"

            else:

                        if apples < bananas:

                                    answer = "There are more bananas than apples"

                        else:

                                    answer = "Stop counting fruit"

            print(answer)

This would return

            Stop counting fruit

Because apples and bananas are equal in count, neither greater than the other, the third answer is returned.

Chained Statements

Chained statements use if/elif/else flow controls, all indented at the same level, to choose between several outcomes.

            apples = 23

            bananas = 23

            if apples > bananas:

                        answer = "There are more apples than bananas"

            elif apples >= bananas:

                        answer = "Why are you counting fruit? Get back to work"

            else:

                        answer = "There are more bananas than apples"

            print(answer)

This would return

            Why are you counting fruit? Get back to work

Because the elif (Python's shortened version of “else if”) asks whether apples are greater than or equal to bananas, and yes, they are equal in number to one another.

While Loop

While loops are used to execute code repeatedly, as long as the condition is met. The code is “looped” until the condition becomes False; if it never does, the loop runs infinitely.

Example:

(please note when commenting in Jupyter, precede all text with # symbol)

# while condition:                

#executable code

(also important to note is that code below while must be indented with one press of the TAB key)

count = 0        (0 is assigned to the variable count)

while count <= 12:    (while count is less than or equal to 12)

            print(count)   (return count and run through the next line of code)

            count = count + 1 (whatever count holds, add 1 to it each time)

(keep running through the loop until count passes 12) Returns:

0

1

2

3

4

5

6

7

8

9

10

11

12

and then the loop stops, because count has reached 13 and the condition is no longer True.

For Loop

A for loop is used to repeat a sequence of code a set number of times.

Example below:

for i in range(4):

            print("This is a loop")

Will return

This is a loop

This is a loop

This is a loop

This is a loop

Because the variable i takes each value in the range 0, 1, 2, 3 in turn, “This is a loop” is printed four times.

Please remember that in Python, the very first index is 0 and not 1!

While you might count 1, 2, 3, 4, Python counts 0, 1, 2, 3, which still represents a range count of 4 places.
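You can see this for yourself by printing the loop variable instead of a fixed message:

for i in range(4):

            print(i)

Will return

0

1

2

3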

More lessons to come!

SAP at WYWM - My Rare SAP Skill Set

It’s no secret that I am mostly self-taught when it comes to SAP. 

I first encountered the system when I started a job at Holcim Humes Australia titled “Production Works Clerk”. It was a bottom-level data entry/administration role in a small country concrete manufacturing plant that was part of a large multinational specializing in quarrying and concrete products. The Humes subsidiary was the only part of the company which manufactured finished concrete products. Holcim (now Lafarge Holcim) focusses on quarrying and bulk concrete delivery in mobile agitator trucks.  

It has only been in the last twelve months or so that I have dived into formal SAP learning. I think I received maybe one week of on-the-job introduction to SAP when I started in the role. Crucially – the person handing the job over to me still operated a paper-based office and was clearly uncomfortable with the database. 

I remember I was introduced somewhat tentatively to the Graphical User Interface (GUI) version of SAP that the Company was running.  There was no way the other ladies in the office were going to be able to teach me much - they were too afraid of the database and both always stayed well inside their designated lanes. I won’t lie – the interface was confronting and somewhat clunky but I applied the veteran concept of “this is the tool I’ve been given – I'd better learn how to use it”. 

So I did. 

I’ve only just recently realised that the exposure I gained in operating SAP Enterprise Resource Planning (ERP) functions in that little country plant has made me somewhat of a rare animal when it comes to SAP.  

Let me explain.  

I am an Operator – a very good one – referred to as a Super User. I am the person who enters, uses and maintains the data all day, every day.  

Normally a person operating SAP within a company will only operate in one area of specialty. The person paid to do Payroll will only do Payroll. The person doing Inventory Management will have nothing to do with Purchasing or Dispatch. The person who takes sales orders might not even be in the same location as the person doing Production Entry. SAP's role-based authorization system is designed to segregate incompatible roles. More on that in another article.  

My company ran the full SAP Enterprise Resource Planning Suite to varying levels.  

The topic list for SAP’s TS410 Integrated Business Systems Course (S/4 HANA ERP) looks like this: 

  1. SAP S/4 HANA Enterprise Management Overview 
  2. SAP Fiori 
  3. System Wide Concepts 
  4. Record to Report Processing 
  5. Hire to Retire Processing 
  6. Source to Pay Processing 
  7. Warehouse and Inventory Management 
  8. Design to Operate Processing 
  9. Lead to Cash Processing 
  10. SAP Project Systems 
  11. SAP Enterprise Asset Management 

We were a very small Plant so all of the office staff had multiple areas of responsibility and we all at one time or another swapped and shared roles. No one would have ever been able to take leave if we didn’t.  

My military training was in Army Road Transport Supply Chain Logistics and Human Resources Management. I was woefully underemployed in the role but that’s another story.  

Honestly – putting me in that job was like using a Main Battle Tank to eradicate mice in your loungeroom. Let's just say I became known (if not loved) for using the information in SAP to challenge the status quo and drive improvement. I even managed to make SAP sort of fall over more than once when I ran large-scale searches for analysis.  

If you want to know how much I know about SAP ERP, or how my military experience applied to my SAP Operator roles – read on.  

Enough of the general information.  

Why is my SAP skill set rare? 

Because I made myself into a Super User in four of the branches of ERP. I had experience in three others thanks to my military training. I made it my business to learn the eighth (although a lot of it still goes over my head). I more or less did the Army Officer thing of wanting to know everything that was going on around me. I didn’t stay in my lanes and was constantly asking “Why has this gone wrong?” and “How can it be done better?”.  

It almost never happens in business that one person operates more than one branch of SAP ERP. I’ve covered all of them to some extent. 

The sum of my experiences makes me a rare animal in the SAP ERP space. That’s why I get to mentor the WYWM SAP Squad Training Programme.  

Got questions?

Click here to ask the WYWM SAP Community on Discord

Building Blocks of SAP - Mel's Golden Rules

Got Questions?

Click here to ask the WYWM SAP Community on Discord

Build an LP Calculator for League of Legends in Excel

In this quick lesson, learn how to build an LP Calculator for League of Legends using Excel.
We'll use the following tools/functions:

Moving Averages for Time Series Data Analysis

Let’s look at how we can use moving averages to spot trends in time-series data.

Time series are among the most common datasets that you will come across in your Data Analytics career. A time series dataset is a collection of sequential data recorded at time intervals.

Some of the many examples include:

One of the main challenges when working with a time series is that there can be strong fluctuations, or noise, in the data. This can make it difficult to spot trends and key features that would otherwise be hidden under these fluctuations.

In this video we use a simple moving average to explore the trend in a time series of historical business revenue using MS Excel. We then go on to use this moving average as a baseline to measure monthly revenue against.

You can find the dataset on Kaggle (some columns & rows were deleted in this example to simplify things): https://www.kaggle.com/podsyp/time-series-starter-dataset
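If you would like to reproduce the same idea outside Excel, here is a short sketch in Python with pandas. The file and column names are assumptions for illustration, so adjust them to match your copy of the dataset:

    import pandas as pd

    # Load the monthly revenue series; 'Month' and 'Revenue' are assumed column names
    df = pd.read_csv("time_series_revenue.csv", parse_dates=["Month"])

    # A 12-month simple moving average smooths out the noise and exposes the trend
    df["SMA_12"] = df["Revenue"].rolling(window=12).mean()

    # Use the moving average as a baseline: how far above or below trend is each month?
    df["VsBaseline"] = df["Revenue"] - df["SMA_12"]

    print(df.tail())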