The Cognitive Stairways of Analysis

by: Nicole Hoffman

TL;DR: I introduce a series of analytic process models from various industries and use key takeaways to create my own framework called the Cognitive Stairways of Analysis.

Introduction

Analysis. You might hear this term all the time, but what does it really mean? How do you analyze data? Unfortunately, this is something I had to sort out on my own when I landed my first information security job as a cybersecurity analyst intern. I have learned a lot since that day, but I still feel there is a huge gap in training when it comes to analysis.

So, I wanted to take a deeper dive into the tradecraft of analysis. As I researched the topic, I found myself confining my search to cyber threat analysis specifically. I found a lot of great information, but the data I was finding was repetitive and vague. Some analytic frameworks I stumbled upon had analysis as a step but did not really explain what someone does during that step.

So, I decided to expand my search to figure out how other industries are performing analysis. Information security professionals are not the only ones performing analysis. This blog post will focus on a few of the analytic models I found during my research. 

Cognitive Interpretation of Data Analysis

The first analysis process I would like to discuss came from a wonderful white paper titled A Cognitive Interpretation of Data Analysis by Garrett Grolemund and Hadley Wickham that was published in August of 2012. The paper compares data analysis to the process of sensemaking. 

What is sensemaking? The human mind “creates and manages internal cognitive structures that represent aspects of external reality. These structures consist of mental models and their relationships” (Grolemund & Wickham, 2012). The authors mention these mental models go by several names including schemas. “A schema is a mental model that contains a breadth of information about a specific type of object or concept” (Grolemund & Wickham, 2012).

Humans cannot remember every observation they experience, so the brain maintains the schemas. The schemas are organized in the brain in something called a semantic network. “In this way, the mind uses schemas and semantic networks to construct our perception of reality from limited sensory input” (Grolemund & Wickham, 2012). The process of maintaining the schemas is sensemaking. 

The following model is a summary of the sensemaking process. When experiencing an event, the brain attempts to find a relevant schema in its semantic network. Any observation within the event that does not fit a schema is known as a discrepancy, or insight. When an insight is discovered, the brain does one of two things: it either updates one of its schemas or determines the observation is untrustworthy and ignores it.

Sensemaking Model (Grolemund & Wickham, 2012)

The authors claim data analysis is a sensemaking task. “It has the same goals as sensemaking: to create reliable ideas of reality from observed data. It is performed by the same agents: human beings equipped with the cognitive mechanisms of the human mind” (Grolemund & Wickham, 2012). 

Honestly, I could not agree more with Grolemund and Wickham. Analysts regularly create hypotheses either before or during analysis and attempt to test their validity. When discrepancies are found that do not align with the hypothesis, a few things can occur: a new hypothesis could be created in addition to the original, the hypothesis could be updated slightly, or the information could be dismissed as useless. I feel as though I am oversimplifying or missing something, but I am just making a general comparison.

Furthermore, the authors apply the sensemaking process model to exploratory and confirmatory analysis. Exploratory analysis is a form of analysis that begins with a dataset and no preconceived assumptions or hypotheses about the data. An underlying structure, or schema, is sought after to make sense of the dataset. Confirmatory analysis, on the other hand, begins with a hypothesis, or schema, and attempts to find relevant data in a dataset to validate it. 

Exploratory and Confirmatory Analysis Models (Grolemund & Wickham, 2012)

The authors created a conceptual step by step model of the process of both exploratory and confirmatory analysis.

A generalized exploratory task proceeds as follows:

1. Fit a tentative model to available data 

2. Identify differences between the model and data 

3. Judge whether the differences suggest that the model is misfit, overfit, or underfit (discrepancies) 

4. Retain or refine the model as necessary 

5. Select a plausible schema that interprets the model in the context of the research 
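As a rough illustration (mine, not the paper's), the exploratory loop of fitting a tentative model, checking discrepancies, and refining can be sketched in Python with a toy dataset:

```python
import numpy as np

# Toy dataset (made up for illustration): a noisy quadratic relationship.
rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = 2 * x**2 - 3 * x + rng.normal(0, 5, size=x.size)

# Step 1: fit a tentative model, starting simple (a straight line).
# Steps 2-4: compare residuals across candidate models and refine.
for degree in (1, 2, 3):
    coeffs = np.polyfit(x, y, degree)
    residual_std = np.std(y - np.polyval(coeffs, x))
    print(f"degree={degree}: residual std={residual_std:.1f}")
# A sharp drop in residual spread from degree 1 to 2 suggests the linear
# model was misfit; little change from 2 to 3 hints that a higher degree
# would only overfit. Step 5: interpret the retained model in context.
```

The judgment calls in steps 3 and 5 remain human work; the code only surfaces the discrepancies to be judged.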

A generalized confirmatory task proceeds in the opposite direction: 

1. Select an appropriate schema to guide data collection.

2. Deduce a precise hypothesis from the schema. Multiple hypotheses may be developed to test multiple aspects of the schema. 

3. Identify the set of data that would be relevant for testing the hypothesis 

4. Collect a representative subset of the data. 

5. Identify differences between data and hypothesis 

6. Judge whether the discrepancies imply a meaningful difference between the hypothesis and reality or result from random variation or faulty data 

7. Confirm, update, or reject the hypothesized model (and its associated schema)

(Grolemund & Wickham, 2012) 

Parallels Model (Grolemund & Wickham, 2012)

“Data analysis parallels sensemaking. Analysts deduce a precise hypothesis (model) from the schema, which they compare to the data or a transformation of the data. Analysts must attempt to distinguish discrepancies between schema and data from differences that result from variance and bias. Analysts must also match each accepted model back to a schema to provide interpretation in real world concepts.” (Grolemund & Wickham, 2012)

For example, if a device were experiencing pop-ups I would assume, based on my experience, that the device had a form of adware. In this example, adware would be the schema. My first hypothesis would be that the user clicked on a malicious link on a website.

After collecting data to validate this hypothesis, I found the user did not click on a link but downloaded a new toolbar. This information does not align with my hypothesis but does align with my schema. The data would be confirmed, and the hypothesis updated.   

Christopher Chatfield’s Statistical Investigation Process

Dr. Christopher Chatfield is a retired Reader in Statistics in the Department of Mathematical Sciences at the University of Bath in England. Chatfield has authored several textbooks, including:

· The Analysis of Time Series: An Introduction with R

· Introduction to Multivariate Analysis

· Problem Solving: A Statistician’s Guide

· Statistics for Technology

I am going to be focusing on a statistical investigation process Chatfield created which is published in the Problem Solving: A Statistician’s Guide textbook. The book “sets out to clarify the general principles involved in tackling real-life statistical problems. It is aimed at students and practitioners who have studied some basic theory but are unsure what to do when faced with real data, especially if the data are ‘messy’ or the objectives are unclear.” (Chatfield, n.d.)

The statistical investigation process is broken down into seven stages:

1. Understand the problem and clarify objectives

2. Collect data in an appropriate way 

3. Assess the structure and quality of the data, i.e., clean the data 

4. Examine and describe the data 

5. Select and carry out appropriate statistical analyses 

(a) Look at data 

(b) Formulate a sensible model 

(c) Fit the model to the data 

(d) Check the fit of the model  

(e) Utilize the model and present conclusions 

6. Compare findings with further information, such as new data or previous findings 

7. Interpret and communicate the results

This model is fairly straightforward. I really enjoyed this model because it includes the step of cleaning the data. Raw data can be difficult, but not impossible, to analyze, particularly when you are collecting data recorded in different taxonomies or formats. That makes it difficult to query intelligently, and it is easy to miss things.

In my opinion, this methodology takes an exploratory approach to analysis. The process begins with a dataset instead of a hypothesis, or model. The model is created after exploring the data and looking for an underlying structure or relationship; one common technique for this is regression analysis, which attempts to model the relationship between variables in a dataset.

Looking at this model, I wonder what an analyst would do if the formulated model does not match the data. Sometimes there needs to be a cyclical, if-then style approach to analysis, because you are not always going to formulate the correct model or hypothesis on the first try.

The OSEMN Framework

OSEMN is a popular model, or framework, for organizing research in the field of data science. The acronym stands for Obtain, Scrub, Explore, Model, and iNterpret. Once again, we have another exploratory approach, with the model created after the analysis. I found this model particularly interesting because, during the cleaning, or scrubbing, step, the goal is to ensure the machine understands the data, not just the analyst.

OSEMN Process (Lau, 2019)

Criminal Intelligence Analysis

The next model is from a paper titled How Analysts Think: Think-steps as a Tool for Structuring Sensemaking in Criminal Intelligence Analysis by Nallini Selvaraj, Simon Attfield, Peter Passmore, and William Wong. The Model of Police Operational Intelligence Analysis is broken down into three stages: prepare, analyze, and report/advise. 

What I found particularly useful about this model is that each stage is broken down into additional steps. There are too many analysis models that list analysis as a step without elaborating on what that means. We cannot, as a community, assume everyone has the same definition of analysis.

Model of Police Operational Intelligence Analysis (Selvaraj et al., 2016).

In addition, this is the first time I have heard the phrase think steps. “Think-steps can be a template for a generic crime, but they can also be things that are drawn from the investigation about the case which I consider are specific areas to consider” (Selvaraj et al., 2016). Furthermore, the authors go on to say the think steps “provide a template that enables the analyst to “approach the case”, decompose it into separate elements and classify associated data accordingly” (Selvaraj et al., 2016).

In other words, the criminal analysts are attempting to choose a schema, or multiple schemas, to match the data to. For criminal intelligence analysts, the schemas are the crimes, such as murder, burglary, and human trafficking. Each crime has its own set of think steps. In information security, each type of malware is going to have different think steps associated with its analysis.

The idea of ‘think’ steps is one of the best pieces of analytical advice I have ever received from any piece of literature. I am so glad I stepped out of my comfort zone to determine how other industry professionals are performing analysis. 

Business Analysis Model

The next model came from an article by Michael Coveney titled 'Business Analytic' Model Life Cycle. There are a lot of different models for business analysis, but I was particularly fond of this one because of its final step: monitoring the model's performance. This can be particularly useful when an alert or issue jump-starts your analysis.

Business Analysis Model (Coveney, 2020)

For Example:

  1. Pop ups on device (Define what is being investigated)
  2. Gather specific event logs from device (Collect relevant data)
  3. Determine adware is present on the device caused by a toolbar downloaded by the user (Model / analyze the data)
  4. Further analysis showed the cause was actually a browser extension with toolbar in the name. (Adapt the model) 
  5. The organization used this incident to add a policy regarding browser extensions, as well as additional security controls to prevent their download.
  6. The analyst can continue to monitor the performance of the policy and applied security controls. (Monitor model performance) 

In information security, the analyst is not always going to be the one monitoring the performance of a control or policy, so I did not end up using this step in the models of analysis I discuss later in this post. However, that does not diminish its value in any way, and it will remain an honorable mention.

Scientific Method

The scientific method is a method of procedure that has characterized natural science since the 17th century, consisting of systematic observation, measurement, and experiment, and the formulation, testing, and modification of hypotheses (Oxford). If you have been through grade school, you are probably familiar with it. To me, it is one of the original methodologies of analysis, one that spans several industries.

When I think about the scientific method, I think about the science projects I completed over the years. To me, the process was to create a hypothesis, perform an experiment, determine whether the hypothesis was correct or not, and then present the results. If my presentation required a physical demonstration, such as a volcano, I made sure the process was repeatable. No one wants to flop a live demo.

The Scientific Method (Scientific Method, 2018)

At the time, I did not realize the repeat portion of my methodology was actually part of the scientific method; it was not just my way of preventing embarrassment. I wanted to emphasize this step because I think it is something we can forget in information security. It goes back to that age-old saying: trust, but verify.

This is where Confirmatory Analysis comes in. Confirmatory Analysis is the form of analysis where you put your hypothesis to the test by finding relevant data in the dataset for validation. As the graphic shows, this does not have to be a one-person job. I know I have asked for help from peers and coworkers to validate the results of my analysis.  

Medical Diagnostic Process

I really wanted to add a medical analysis example, but I have to say it was very difficult. I was briefly in the medical field, so I have an idea of the process a physician takes when diagnosing a patient. I knew there had to be an analytic methodology similar to the model of sensemaking we discussed earlier. In my mind, a doctor would take either an exploratory approach or a confirmatory approach depending on the situation and the severity of the symptoms. This is, once again, my own opinion of what could potentially occur during diagnostic analysis.

Confirmatory

  1. Collect data from the patient about the medical issue
  2. Collect data about the patient’s vitals and history.
  3. Find a schema or diagnosis that matches the symptoms and create hypothesis
  4. Order additional tests to validate hypothesis (confirmatory analysis)
  5. Disclose the diagnosis to the patient

Exploratory

  1. Collect data from the patient about the medical issue
  2. Collect data about the patient’s vitals and history.
  3. If no schema is found, gather additional information to create hypothesis (exploratory analysis).
  4. Consult with other physicians (perhaps they have a familiar schema that matches the dataset)
  5. Disclose the diagnosis to patient

I think I was partially correct in my assumptions, but when I found the following model, I realized there are a lot of things I did not consider. Every physician differs in how they arrive at a diagnosis. I am going to discuss a diagnostic process model I found in the book titled Improving Diagnosis in Health Care.

As you can see in Figure 8, this model takes a cyclical approach to the data collection and analysis phase. This includes collecting data from the patient, interpreting that data into medical terminology, and creating a working diagnosis. I forgot two important parts of the process when I created my interpretation: the physical exam and the treatment plan.

You can collect and analyze all the data you want, but a physical examination can add a plethora of information that you could not otherwise obtain. Notice the treatment plan cycles back to the information gathering stage. In the event the treatment plan is not working, the physician would then continue collecting data to determine why. This can include throwing out the original hypothesis and creating a new one. 

Medical Diagnostic Process (Erin P. Balogh, Bryan T. Miller, and John R. Ball, Editors; Committee on Diagnostic Error in Health Care; Board on Health Care Services; Institute of Medicine; National Academies of Sciences, Engineering, and Medicine, n.d.)

This is reminiscent of the business analysis process, where you monitor the performance of the model, or, in our example, the security controls implemented to prevent users from downloading browser extensions. I discussed what type of policies and controls to implement to prevent the incident from recurring, but I did not think about how to contain and eradicate the actual problem, which was the adware.

In medicine, some illnesses have similar symptoms. I remember one time I went to the hospital in a massive amount of pain. The doctor concluded it was my appendix. He gave me meds to control the pain and sent me off for a scan to confirm. The scan revealed it was a ruptured cyst and not my appendix. Could you imagine, though, if the doctor had not confirmed his diagnosis and treated the wrong issue? It could have been bad. Or if he thought it was a cyst and sent me home, but it was actually my appendix. Scary stuff.

Historically, malware has a way of masquerading as something other than what it is. For example, in 2017 a new variant of the Petya ransomware, called NotPetya, was discovered. Ransomware typically encrypts user data before demanding a ransom for the decryption key. NotPetya, however, was created to look like ransomware but actually had wiping capabilities, making victim devices unbootable.

Not every example is going to be a nation-state-level threat, but this is a great reminder that not everything is as it seems. When you treat issues in your environment, ensure the treatment works, just like in the medical diagnostic process.

Weather Forecasting Process

I am actually a huge meteorology nerd. I grew up wanting to be a meteorologist for a while because I was, and still am, fascinated by storms. So of course, I had to include an analysis process from weather forecasting. Although the model I found is actually a workflow for forecasting weather, it still works because it gives us a glimpse of the type of analysis that occurs.

Simple Weather Forecasting Workflow (Tsahalis et al., 2013b)

The process steps are as follows:

  1. There is a geophysical environment
  2. Observations are made over a period of time
  3. The data is collected from sources and processed for analysis
  4. Next comes the analysis and assimilation stage. I am going to be honest: I had to google the word assimilation because I did not know it offhand. Assimilation is the process of taking in and fully understanding information or ideas (Oxford).
  5. Mathematical-Physical models of numerical weather forecast are created.

The process can now go one of two ways. The first path starts with an exploratory analysis methodology whereas the second path begins with confirmatory analysis methodology and forgoes the exploratory analysis all together. 

Exploratory

  • Direct Model Output (DMO) Statistics are performed. (Exploratory Analysis)

 “Model Output Statistics is an objective weather forecasting technique which consists of determining a statistical relationship between a predictand and variables forecast by a numerical model at some projection time(s). It is, in effect, the determination of the “weather related” statistics of a numerical model.” (Glahn & Lowry, 1972)

  • Statistical post-processing (such as Kalman filtering, a statistical algorithm that recursively refines an estimate as new observations arrive) is performed. (Confirmatory Analysis)
  • Subjective Interpretation by Meteorologists
  • Weather forecast for users. 
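For intuition only, here is a drastically simplified, one-dimensional sketch of the Kalman filter idea: blend a prior forecast with noisy observations, weighting each by its uncertainty. All the numbers are invented for illustration; real forecast post-processing is far more involved.

```python
# One-dimensional Kalman update: combine an estimate and a measurement
# according to their relative uncertainties (variances).
def kalman_update(estimate, est_var, measurement, meas_var):
    gain = est_var / (est_var + meas_var)  # how much to trust the new observation
    new_estimate = estimate + gain * (measurement - estimate)
    new_var = (1 - gain) * est_var         # uncertainty shrinks after each update
    return new_estimate, new_var

# Raw model forecast of temperature (deg C) and its variance.
estimate, est_var = 20.0, 4.0
# Successive station observations, each with measurement variance 1.0.
for obs in [22.1, 21.8, 22.3]:
    estimate, est_var = kalman_update(estimate, est_var, obs, 1.0)
print(f"post-processed estimate: {estimate:.2f} (variance {est_var:.3f})")
```

Each update pulls the forecast toward the observations while the variance, and therefore the filter's willingness to move, steadily shrinks.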

Confirmatory

  • Statistical Interpretation (Confirmatory Analysis)
  • Subjective Interpretation by Meteorologists
  • Weather forecast to users

I understand there was a lot of meteorological jargon mixed up in that example, but it is safe to say that meteorologists have a clear and concise process of analysis when forecasting weather.

After analyzing this process, I had a few takeaways. First, it was the only model to explicitly establish an environment, which I thought was interesting and important to remember, specifically when determining the scope of an analysis.

Finally, I like that there is a subjective interpretation by the meteorologist based upon the visualizations and data provided by an analyst. To me, this seems like an additional layer of confirmatory analysis, which is never a bad thing.

The Cognitive Stairways of Analysis

Throughout my research for this blog post I learned so much and found so many great analytical models. However, I finished my research wanting more. There were key elements I wanted to combine into a single model, or framework, of analysis. So, I did! I created the Cognitive Stairways of Analysis. When I say stairway, I am merely comparing a step-by-step process to a stairway.

While there are some optional cycles within the stairways, most of the time when you are going up a stairway you have an end goal in mind. The end goal when analyzing datasets in Information Security is to be able to disseminate or interpret the findings in an intelligent manner.

Currently, there are three stairways based upon certain starting points that you might experience in Information Security. I am hoping to add additional stairways in the future, but I feel as though this is a great starting point to propose this new analysis framework. 

The Cognitive Stairways of Analysis 1: Alert

Step 1 – Receive Alert

The first stairway begins with an alert. The alert can come from a security tool or by word of mouth, such as a staff member informing you of a pop-up problem. When you begin with an alert, you are given a potential problem that you need to solve. 

Step 2 – Determine Scope

The scope, or goals, of the analysis need to be determined in addition to the environment or sources you will need for your analysis. You do not want to pull the logs of the CEO’s device if the incident involves a Human Resources staff member.

Typically, though, when it is an alert from a security tool you will be able to acquire that data in the alert such as which device is experiencing the issues. Remember, the weather forecast workflow taught us it is important to understand what environment you are collecting observables from. 

Step 3 – Compile Data / QoI Check

Once you know what data sources you need, you can start to compile the data. Once the data is compiled, you can perform what is called a Quality of Information (QoI) check. A QoI check evaluates the completeness of the information available as well as the data sources.

This check is important because it can identify information gaps. If you discover an information gap, a new information or intelligence requirement can be created. In addition, it can help boost confidence levels in analytic decisions.
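As a toy illustration of a QoI check (the records and field names here are hypothetical), measuring field completeness across a set of log records could be as simple as:

```python
# Hypothetical QoI check: how complete is each field across our records?
records = [
    {"host": "ws-101", "user": "jdoe",   "event_id": 4625},
    {"host": "ws-102", "user": None,     "event_id": 4625},
    {"host": "ws-103", "user": "asmith", "event_id": None},
]

for field in ("host", "user", "event_id"):
    present = sum(1 for r in records if r.get(field) is not None)
    print(f"{field}: {present / len(records):.0%} complete")
# Fields well below 100% flag an information gap, which may justify a
# new information or intelligence requirement before analysis continues.
```

A real QoI check would also weigh source reliability, not just completeness, but even this small pass makes gaps visible before they skew the analysis.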

Step 4 – Clean Data / Omit Useless Data

When I say clean the data, I mean ensure the dataset is organized in a common taxonomy. It can be extremely irritating when this does not happen, and it can result in an incomplete analysis. For example, if the location field is listed as San Diego in a few records, SD in a few others, and SDCA in a few more, it gets confusing quickly.

If you query all the logs from the SD office, you will not get the results from the records with San Diego or SDCA listed. This is also the time to omit, or get rid of, any data that is not relevant to your investigation.
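Sticking with the San Diego example, a minimal normalization pass (the alias table, field names, and log entries are all hypothetical) might look like this:

```python
# Map inconsistent office labels onto one taxonomy so a single query
# catches every record.
LOCATION_ALIASES = {"SD": "San Diego", "SDCA": "San Diego"}

raw_logs = [
    {"office": "San Diego", "event": "login"},
    {"office": "SD",        "event": "logout"},
    {"office": "SDCA",      "event": "login"},
    {"office": "NYC",       "event": "login"},
]

# Replace each alias with its canonical label; unknown values pass through.
cleaned = [{**rec, "office": LOCATION_ALIASES.get(rec["office"], rec["office"])}
           for rec in raw_logs]

# After cleaning, one query now returns all three San Diego records.
san_diego = [rec for rec in cleaned if rec["office"] == "San Diego"]
print(len(san_diego))
```

Building the alias table is itself analysis work; the point is that the mapping lives in one place instead of in every query.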

Step 5 – EDA / Visualize and Regression 

Exploratory Data Analysis (EDA) is a form of analysis where you are given a dataset but not necessarily a hypothesis or data model to match it to. In this form of analysis, you explore the data in order to generate a hypothesis. This is also the time to visualize the data and perform regression analysis, which attempts to find relationships between variables in a dataset.
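A minimal sketch of what regression during EDA can look like, with all numbers invented for illustration:

```python
import numpy as np

# Hypothetical EDA question: is alert volume related to the number of
# remote users?
remote_users = np.array([10, 20, 30, 40, 50])
alerts       = np.array([12, 25, 31, 45, 52])

# Simple linear regression: fit alerts ~ slope * users + intercept.
slope, intercept = np.polyfit(remote_users, alerts, 1)
r = np.corrcoef(remote_users, alerts)[0, 1]
print(f"slope={slope:.2f}, intercept={intercept:.2f}, r={r:.3f}")
# A correlation near 1 suggests a strong linear relationship, which can
# be turned into a hypothesis to carry into confirmatory analysis.
```

The fitted line is not a conclusion; it is a candidate structure in the data that the hypothesis, and later the confirmatory step, will have to survive.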

Step 6 – Generate Hypothesis / Think Steps

Once you perform the EDA and regression analysis, you should be able to generate a hypothesis. If you remember the Model of Police Operational Intelligence Analysis, it included Think Steps, and every hypothesis should have a list of them. For example, if your hypothesis is that someone is attempting to brute-force an admin account, a think step might be to review Windows Event ID 4625 (failed logon) over the last half hour. The Think Steps are the steps you would take for each type of issue, and determining them ahead of time will speed up your confirmatory analysis.
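To make the brute-force think step concrete, here is a hypothetical sketch (the log records, timestamps, and account names are invented) of counting recent Event ID 4625 entries per account:

```python
from datetime import datetime, timedelta

# Think step: count failed logons (Windows Event ID 4625) per account
# in the last 30 minutes.
now = datetime(2024, 1, 1, 12, 0)  # fixed "current" time for the example
events = [
    {"time": now - timedelta(minutes=5),  "event_id": 4625, "account": "admin"},
    {"time": now - timedelta(minutes=12), "event_id": 4625, "account": "admin"},
    {"time": now - timedelta(minutes=14), "event_id": 4625, "account": "admin"},
    {"time": now - timedelta(minutes=90), "event_id": 4625, "account": "admin"},
    {"time": now - timedelta(minutes=3),  "event_id": 4624, "account": "jdoe"},
]

window_start = now - timedelta(minutes=30)
failed = {}
for e in events:
    if e["event_id"] == 4625 and e["time"] >= window_start:
        failed[e["account"]] = failed.get(e["account"], 0) + 1

# A cluster of recent failures against one account supports the hypothesis.
print(failed)
```

In practice this would be a SIEM query rather than a script, but the think step is the same: a pre-decided question the data can answer quickly.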

Step 7 – Confirmatory 

Confirmatory Analysis is when you put your hypothesis or hypotheses to the test using the Think Steps. In the event that you are unable to validate your hypothesis you can start again at Step 5 with further Exploratory Analysis. 

Step 8 – Disseminate

This is the single most important step in the stairway: dissemination, the end goal. This is where you conclude your analysis and interpret your results. This can be in the form of a report or just a note in a ticket describing your findings and how you came to that conclusion.

The Cognitive Stairways of Analysis 2: Brainstorm

Step 1 – Brainstorm / Generate Hypothesis / Think Steps

The second stairway begins with a brainstorming session. Does your CISO ever ask whether you are susceptible to the new threat they saw on the news? This stairway is for those moments too, because most likely you are going to brainstorm the likelihood by looking up the threat. This step also covers the generation of a hypothesis and Think Steps.

Step 2 – Determine Scope

The scope, or goals, of the analysis need to be determined in addition to the environment or sources you will need for your analysis. Remember, the weather forecast workflow taught us it is important to understand what environment you are collecting observables from. 

Step 3 – KAC / Devil’s Advocate

Key Assumptions Check (KAC) is an analysis technique where you list and review any assumptions regarding a topic. Once you list the assumptions, you determine the likelihood of each. It is a great way to flush out any biases you might have regarding the hypothesis or topic. It is also a great time for the Devil's Advocate technique, in which you attempt to think of any possible alternative to the topic at hand, or in this case the hypothesis.

Step 4 – Compile Data / QoI Check

As in Stairway 1, compile the data from the sources you identified, then perform a Quality of Information (QoI) check to evaluate the completeness of the information and its sources. This check can identify information gaps, drive new information or intelligence requirements, and boost confidence in analytic decisions.

Step 5 – Clean Data / Omit Useless Data

As in Stairway 1, clean the data by organizing the dataset into a common taxonomy, and omit any data that is not relevant to your investigation.

Step 6 – EDA / Visualize and Regression

As in Stairway 1, this is where Exploratory Data Analysis (EDA), visualization, and regression analysis take place.

Since you already have a hypothesis, this step is OPTIONAL; you could move straight on to Confirmatory Analysis. Personally, I still like to ensure I am correct by looking over the data once more, but this is a case-by-case call; sometimes you just know. However, if you do perform EDA and find discrepancies that disprove your hypothesis, you can go back to square one or update your hypothesis as needed before moving on to Confirmatory Analysis.

Step 7 – Confirmatory 

As in Stairway 1, put your hypothesis or hypotheses to the test using the Think Steps. If you are unable to validate your hypothesis, you can start again at Step 5 with further Exploratory Analysis.

Step 8 – Disseminate

As in Stairway 1, dissemination is the end goal: conclude your analysis and interpret your results, whether in a report or just a note in a ticket.

The Cognitive Stairways of Analysis 3: Red Team Analysis

The final Cognitive Stairway of Analysis is similar to the others, but it begins a bit differently, so to cut out the monotony I am not going to go through each step. Stairway 3 can start one of two ways.

You can start with a Red Team Analysis, where you attempt to put yourself in the attacker's shoes, or you can begin by determining the scope and the environment from which observables will be collected. The rest of the steps are identical to Stairway 2, with an optional EDA step that could lead back to a new hypothesis.

The reason I have an alternative beginning is that I have experienced both scenarios. I have seen a manager ask someone to determine how secure a particular area of the network is; in that case, the scope and environment were assigned before the Red Team Analysis began.

Other times I have been in a situation where I just ask myself if I could exploit something I witness in my environment. In this scenario the Red Team Analysis would start the analytic process. Note: I am by no means a salty red teamer. This is just based on my experiences.

In conclusion, I was motivated to write about analysis because it was something I had to figure out the hard way. Everyone talks about analysis, but no one goes into detail about how to perform it. I wanted to create a framework with a step-by-step process for analyzing data. These stairways do not have to take a long time.

In time, the framework will become a mental checklist that you run through like the Think Steps. I know the biggest challenge analysts face is a lack of time. If you are in an operational role, you do not have time to perform more complicated analytic techniques that require charts and spreadsheets. You need something intuitive, easy, and fast. I hope the Cognitive Stairways of Analysis can provide that, and I hope to add more stairways in the future.

Resources

Chatfield, C. (n.d.). Web page for my Problem-Solving text. Christopher Chatfield. https://people.bath.ac.uk/mascc/PS.html 

Coveney, M. (2020, September 28). “Business Analytic” Model Life Cycle. FP&A Trends. https://fpa-trends.com/article/business-analytic-model-life-cycle 

Dineva, K., & Atanasova, T. (2018, January). OSEMN process for working over data acquired by IoT devices mounted in beehives. ResearchGate. https://www.researchgate.net/publication/326842332_OSEMN_PROCESS_FOR_WORKING_OVER_DATA_ACQUIRED_BY_IOT_DEVICES_MOUNTED_IN_BEEHIVES 

Erin P. Balogh, Bryan T. Miller, and John R. Ball, Editors; Committee on Diagnostic Error in Health Care; Board on Health Care Services; Institute of Medicine; National Academies of Sciences, Engineering, and Medicine. (n.d.). Read “Improving Diagnosis in Health Care” at NAP.edu. The National Association of Sciences, Engineering, and Medicine. https://www.nap.edu/read/21794/chapter/4#32 

Glahn, H., & Lowry, D. (1972, December). The Use of Model Output Statistics (MOS) in Objective Weather Forecasting. National Weather Service. https://www.weather.gov/media/mdl/Glahn_1972.pdf 

Grolemund, G., & Wickham, H. (2012, August). A Cognitive Interpretation of Data Analysis. ResearchGate. https://www.researchgate.net/publication/261567457_A_Cognitive_Interpretation_of_Data_Analysis 

Lau, C. H. (2019, January 10). 5 Steps of a Data Science Project Lifecycle – Towards Data Science. Medium. https://towardsdatascience.com/5-steps-of-a-data-science-project-lifecycle-26c50372b492 

Scientific Method. (2018, July 11). Easy Peasy All-in-One High School. https://allinonehighschool.com/scientific-method/ 

Selvaraj, N., Attfield, S., Passmore, P., & Wong, B. L. W. (2016, August). How Analysts Think: Think-steps as a Tool for Structuring Sensemaking in Criminal Intelligence Analysis. ResearchGate. https://www.researchgate.net/publication/314265795_How_Analysts_Think_Think-steps_as_a_Tool_for_Structuring_Sensemaking_in_Criminal_Intelligence_Analysis 

Tsahalis, J., Tsahalis, H., & Moussas, V. (2013a, July). Optimization of a heterogeneous simulations workflow. ResearchGate. https://www.researchgate.net/publication/280939531_Optimization_of_a_heterogeneous_simulations_workflow 

Tsahalis, J., Tsahalis, H., & Moussas, V. (2013b, July). Simple Weather Forecast Workflow. ResearchGate. https://www.researchgate.net/figure/Simple-weather-forecast-workflow-2_fig1_280939531 
