Traditionally, the number of participants is the primary measure of significance when you're conducting a research study. But while volume of participation is key, it misses another dimension: the quality of that participation. Was the respondent overwhelmed by your questions, uncertain about their answers, or, worse, bored and distracted?
We live in the age of big data. It’s a fast-moving, ever-growing ocean of information that can easily overwhelm those who are leading research projects. How do you gain access to the highest quality of data? And once you have it, what will you do with it?
In the first two articles, we discussed that the dominance of digital screens in our work and private lives is already a "here and now" reality rather than something reserved for the "future workplace". We also explained that most screen-enhanced workspaces can be considered unique because the function of those spaces is truly in the eyes of the beholder, or in this case, the worker.
In this article, we want to share the strategy we take when designing workplace design surveys. Specifically, we'll cover why we ask the questions that we do and the insights we're able to capture by taking an indirect questioning approach.
There is a lot of data at our disposal these days. Lots of it. Along with all this data often come feelings of "I am not getting what I need" or "it's a lot of wasted time and effort".
Good feedback, a willingness to understand, a desire to improve: together, these lead to meaningful actions and positive changes.
Data jobs are "the sexiest jobs this century". Data jobs are also among the hardest positions to fill. The qualifications are so high that candidates have the leverage to "name their own price."
Deep learning was prominent in the venture capital world of 2016, and rightfully so. This wave of excitement about AI and computing grew strong because of a new-found comfort with letting unprecedentedly rich data guide progress. Interestingly, the term "deep learning" draws another contrast: previous generations of machine learning lacked the support of real data; in other words, they were "shallow".