
Avoiding Tech Failure: 3 Questions For Success In Sports

Ken Vick


We’re in an age of Big Data and technology, from the device on your wrist to the world of business and into the realm of sports.

At the elite levels of sports, there’s a lot on the line. Organizations, coaches, and high-performance teams want every advantage possible. This means trying to stay on the cutting edge of technology.

The idea that we can better measure, quantify, and analyze data to gain an advantage is enticing. However, we constantly see it fail in the sports world.

Although there are shining examples of data success, there is also a multitude of expensive failures.

From the failures, there are lessons we can learn. Over the last few decades, I’ve identified a few key steps to having a better shot at success.

We can’t get left behind

I was having coffee with the general manager of a professional sports team. In recent years they had been successful, going deep into the playoffs or even winning the championship. I was there to discuss technology and how it could help the team.

As we started having the discussion, I soon realized I wasn’t sure what they were trying to achieve. The GM wanted to begin implementing sports science and the use of technology. But from what I was hearing, the motivation was because other teams were doing it. “We can’t get left behind,” he said. “Other teams are doing this and we need to do it also.”

So as we continued the conversation, I tried to explore. What did that mean to him? How would it actually help the team? What would success look like?

And as we kept going through that conversation, it became very clear to me that the organization was very unclear about why it wanted to do this.

Did they think it would help contribute to more wins during the season or playoffs? No, they were already getting there.

Did they think it would help reduce their number of games lost to injury? No, they had been really good the last couple of years.

Did they think it could help them better develop prospects and younger players? No, they thought that was going well and weren’t worried about it.

The bottom line was they were chasing a trend. Other teams were using sports science, and it was happening in other sports. Therefore, they needed to do it also.

In and of itself this isn’t an entirely horrible answer. It’s reasonable to look at your industry and watch trends. Seeing new technology and questioning whether you are falling behind makes sense. Weighing whether you need to adopt it as well is reasonable.

However, you also need to define the questions you are trying to answer. And that’s what was missing as I finished my coffee.

They were starting with the decision that they needed to do something. They had skipped the stage of asking whether or not they did, and of defining what they wanted to get from it.

You may think this is a unique example, that a professional sports organization would rarely be so haphazard in its approach. Unfortunately, I can tell you it’s quite common in all types of elite sport.

Start with a question

It’s a fundamental tenet of science. In research, the first step is to have a hypothesis: fundamentally, a question that you’re trying to answer.

This is also the key in sports science, technology, and data analysis. In the world of high-performance sport, you must start with a question.

As I was going through the conversation with the GM, I realized they didn’t have a question. They had already arrived at the answer: we must keep up, we need to do something. But I’ve been through this enough times to understand that without a question, you can’t decide if a program is working or not. You have nothing to answer and no outcomes to point to when evaluating whether it was successful.

So, I began to share some potential questions they could ask. In my mind, the first question should have been: do we need technology at all? Explore if, how, and where it could help improve the organization.

Another question they could have explored was what they were doing now that was working. The reality on most pro teams is that coaches change, players move on, and they don’t stay on top forever.

Why not learn and gather a better picture of what they were actually doing now?

That in itself would have been a great starting point.

In my ideal world, an organization begins by gathering data and looks at it retrospectively. They seek to identify trends and key performance indicators. They learn what data relates to what outcomes. When things worked how did the data look? When they didn’t, what data related to that?

My ideal first step is to collect data so you can go back and learn about what happened. Trust your coaches to do their job, then go back to try to find some of the whys that might not have been obvious.

In my worst scenario, a team starts collecting data and trying to make decisions immediately. There was an all-too-familiar scenario we saw over the last decade with GPS tracking. Teams were adopting technology for player tracking with GPS / accelerometers. Then they start looking at speeds or workloads and quickly making decisions on things like:
- Are we doing enough training?
- Do we need more rest?
- Which players are “overtraining”?
- Who is working the “hardest”?
- Who isn’t working “hard enough”?

Without a period of time to match the data to what you already see, it’s a fool’s game. Suddenly trusting new data and interpretations to make the complex decisions coaches have been navigating for decades is foolhardy.

Another all-too-common scenario is adopting a technology that measures a player’s readiness or recovery: a device that gives them a reading of green, yellow, or red and tells them that day’s training capabilities.

Coaches and athletes who start using something like this and changing their training based on unproven and out-of-context data are asking for problems.

The first step is to formulate the questions you want to answer. Once that has been done, you can move on to the three questions we ask before implementing any new technology or measurements in sports settings.

Choosing New Technologies

Coaches, therapists, and teams are constantly inundated with new technologies to consider using. Companies are often happy to throw things at them to get an endorsement. Tech from other sports or areas of health and human performance needs to be reviewed.

For me, it became standard to make sure we could always answer three questions before we proposed introducing new technology into our process.


1. So What?

Once you have questions you are trying to answer, you try to figure out what matters. Is it really important to performance, long-term development, or reducing the risk of injury?

That’s where you have to start. Does what we want to measure matter?

Let’s say we are exploring how to train more efficiently to increase an athlete’s change of direction abilities. If you say you want to look at the rate of force development, there is some logical potential. If you propose measuring VO2 max I’m probably going to give you a strange look.

If you want to measure something, you have to make sure it matters.

Spending time, mental energy, resources, or coaches’ and players’ time on something that doesn’t have an impact will quickly cost you any trust in your program. In high-performance sport it is also likely to cost you your job. So, you must make sure what you measure is important.

While that sounds obvious, it’s often not. Our own biases and interests can take us down a rabbit hole, and we forget the main purpose. This is my ITSS principle.

It’s The Sport, Stupid.

Don’t forget that. You have to keep bringing every technology, every test, and every measurement you want to do back to this question. Does it have a significant impact on what you want?

In the end, whatever you measure must be meaningful. Just don’t forget: everything that’s meaningful might not be measurable.


2. Now What?

Once it is decided you have something meaningful to measure, then you have to ask the next question. What are you going to do with the data?

While as professionals we may be interested in exploring research questions, athletes aren’t too excited to be lab rats. You are asking them to put their best effort or honesty into data collection. If that’s the case, then they had better see that you are using the data to help them.

Don’t underestimate the importance here. If athletes don’t understand why, and/or see that their responses change something, they will stop giving honest effort or feedback. Athletes will quickly learn to game the system for their benefit if they need to.

I learned an important lesson as a young coach over 20 years ago.

I remember handing the coach an inch-and-a-half-thick report on all the testing we had just done with the team. We put the team through two days of testing: performance tests, radar speed data, force plate testing, Wingate tests, VO2 max, and more.

I crunched numbers through the night, completed the analysis, and formatted the reports. It was an impressive sight. This masterpiece of sports performance testing was there to help inform our training.

The coach practically dropped it in the trash and said it didn’t matter. We put the players through all that and the info would never be used to better train or prevent injury.

I learned a lesson there. After you measure something with your athletes, you have to answer the question: now what?

Is the information actionable? If we measure something like height or foot size, that might have some relevance to the sport, but if I can’t change it, and it doesn’t change the programming, then it’s not actionable.

When you measure something, there needs to be something you can do to change it, or it informs your decision in changing something else.

To be actionable is also about philosophy, culture, and environment. The data and analysis in that report had many things that were actionable. We could have made changes to individual training plans based on it.

But that wasn’t how the head coach wanted the program to work. Neither did the head strength coach. We were going to run the program the same way for all the players, and the data didn’t change that. The answer to “Now what?” was: nothing.

This can also occur when there may not be enough time allotted for the intervention. Maybe that’s because of travel, schedule, or the head coach’s practice plan.

You may also have a culture in a team setting of doing things a certain way, or resistance to other methods.

Any of these things can derail implementing the new technology because after you get the data and analysis, you aren’t able to act.

When adding technology, you need both the information and the opportunity to make it actionable.


3. With What?

This final question can also get overlooked. When we have something meaningful to measure and it is actionable, how are we going to measure it? If measuring it isn’t possible, practical, or accurate, then the answer is still no.

For example, years ago several coaches were talking during a break at an international sports industry conference. The strength coach for an NFL team was talking about how they were adding GPS tracking for all the players this coming year.

He was quite excited about them leading the way in the NFL and how it was going to help them.

However, as we continued to talk, a few of us saw a looming challenge. As people continued asking him questions, we began glancing at each other, realizing he didn’t see the problems he was setting up for himself.

They would have 70+ units to deploy for the players in training camp and then every player, all season long. On questioning, he shared they were adding an intern to help.

Several of us who had experience cautioned him that the deployment, collecting, and charging daily was no small feat. Pair that with data analysis and quick reporting for coaches and there was potential for things to fall apart quickly.

That’s an example of not having all the resources to implement a new technology successfully. Not only did the organization have to purchase the system and licenses, but they also needed to employ the staff to do it effectively. They needed to dedicate time to do it.

In other cases, we may not be able to afford the right equipment or enough units to use it in a practical manner. If I have one force plate but want to make it part of the feedback in jumping drills and strength training, I might be creating a bottleneck in my training flow.

There is also the problem of valid and reliable data. Even if you’ve answered the first two questions successfully, you may find that the technology being offered isn’t up to the task.
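One simple way to probe the reliability side is a test-retest check: measure the same athlete repeatedly under identical conditions and compute the coefficient of variation of the device's readings. This is only a sketch, with hypothetical jump numbers and an assumed rule-of-thumb threshold of roughly 5%, not a full validation protocol.

```python
from statistics import mean, stdev

def coefficient_of_variation(trials):
    """Test-retest CV (%) across repeated measures of the same athlete
    under identical conditions -- a common, simple reliability check."""
    return stdev(trials) / mean(trials) * 100

# Hypothetical: five countermovement-jump heights (cm) from one athlete,
# measured back-to-back on the device being evaluated.
jump_heights = [41.2, 40.8, 41.5, 40.9, 41.1]

cv = coefficient_of_variation(jump_heights)
# Assumed rule of thumb: question the device if CV exceeds ~5%.
print(f"CV = {cv:.1f}% -> {'acceptable' if cv < 5.0 else 'questionable'}")
```

If the device's noise (its CV) is larger than the real changes you hope to detect, it cannot answer your question no matter how good the packaging looks.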

More and more in recent years, when people have looked to answer the With What question, the answer has been lacking. That’s because more companies are trying to use data to measure things and create a product they can sell. Often the packaging and user interface are much more important than the data quality.

We have elite athletes and teams using technologies today that are not valid, have poor precision, and aren’t even reliable. These commercial products aiming to measure sleep or recovery status may be fine for behavior change in the general public, but they don’t cut it for making a decision in high-performance sport.

Adding sports technology successfully

To implement technology successfully in a sports setting you need to start with a question. A question of what you want to achieve with that technology. A question that all the stakeholders in the organization can get behind.

Then you have to be sure you can answer the three questions.

So what?
Now what?
With what?

When you have technology and measures that are meaningful, actionable, and practical, then you can consider adding the technology to your toolkit.

Skip these and you might be in for a rough road.

Written by Ken Vick

Ken is President & High-Performance Director @ VSP Global Systems. A creative problem solver supporting athletes & organizations pursuing their best performance.
