Skills at Scale: Building Organizations That Truly Learn | Sandra Loughlin

68m 17s


For years, Dart doubted that companies could actually make skills the building blocks of work. They felt too abstract, too static, too disconnected from real daily work. But Sandra Loughlin proved that in some cases, skills can deliver real value. In this episode, Sandra explains why skills only matter in context, why stretch assignments drive real learning, and what it takes to build a true learning organization at scale.

Dr. Sandra Loughlin is Chief Learning Scientist at EPAM Systems. She holds a PhD in educational psychology from the University of Maryland and previously taught and led transformational learning initiatives there.

Transcription

Learning is hard. Learning requires you to expend often significant efforts to do something that you're uncomfortable with because you don't already know how to do it. Real learning is not something that can be done to people. It has to be what people do for themselves. They need to build new knowledge on top of old knowledge and reflect and practice and get better. It's truly very, very difficult to do. Training, on the other hand, is very much what you do to people. You give them access to content, you put them through courses, you put them in a room, you talk at them. It's an input-based mentality and practice, whereas a lot of learning doesn't happen in any sort of formal context where someone is giving you information. Welcome to the Work for Humans podcast. This is Dart Lindsley. For many years, I've questioned whether companies can really use skills as the building blocks of work. Can you really map out what people can do, line it up with what the business needs, and actually make better decisions? I doubted it. It's a very complex thing to do, and in fact, when I debated the topic with Gareth Flynn in Stockholm earlier this year, I argued that skills-based models don't work. And I wasn't just playing devil's advocate, I meant it. And my guest today convinced me that they can, in some cases, provide real value. Sandra Loughlin is Chief Learning Scientist at EPAM Systems, a $5 billion global software engineering and professional services company with more than 60,000 employees. Sandra holds a PhD in the science of learning and previously taught and led transformational learning initiatives at the University of Maryland. At EPAM, she brings learning science and psychology into the company's unique, data-driven approach to people and work. In our conversation, Sandra explains why learning is different from training, and especially how EPAM's homegrown, unified data architecture and knowledge graphs make it possible to connect people and work at scale. We also talk about how the company balances data with employee agency, why stretch assignments are essential for real learning, and what it takes to keep a skills ontology fresh in a fast-changing world. Alright, don't forget to subscribe so you don't miss the next episode. And now, here's my conversation with Sandra Loughlin. Sandra Loughlin, welcome to Work for Humans. Thank you. I'm so happy to be here. We met in Stockholm and I was there in a debate with Gareth Flynn. And the debate was about skills and AI: can they work together to make companies work better? And my role was the bad guy. And I was up there to say how skills are never going to work and here's five reasons why. But the truth is, to a large extent, I have believed that. So I wasn't just pretending for the purpose of debate. I've come to really doubt skills. And you were the moderator. Then you gave a presentation about what you're doing at EPAM and what EPAM has been doing for a long time. And you showed me that it can work. And so most of my objections were eliminated. And that's what you want, right? The reason you go places to learn things is to find out where you're wrong. And so that's what I want to talk about today. I want to talk about this issue from many, many different angles. So let me just start with your title: Chief Learning Scientist at EPAM. What is that? That is a great question. It is a very unusual title. And it was something that our CEO thought up. Because I'm not our chief learning officer. That's a separate role.
What I do is different. And I used to be, I mean, I guess I still am an actual learning scientist. So my PhD is in the science of learning, the psychology of learning. And so he was looking for a title that was descriptive and he hit upon this one and it's kind of awesome. But it does allude to the fact that my role is non-traditional. So my job is both internally facing and externally facing. And it's very much focused on learning, but it's not just learning. It's on human psychology and getting people to do hard things. That can be our employees, that can be our clients, that can be our clients' employees. It's a lot of different elements. But it's essentially a reflection of my area of expertise, which is how do you get people to do hard things, especially learning? Yes. I want to parse out what you just said. You said, get people to. And I think what you're saying is not teach people to and not force them to, but put them into a position such that they can do difficult things. Yes. And the reason that I like to frame it that way is because real learning, I'm going to distinguish between learning and training. And this could be splitting hairs for some people, but to me, they are fundamentally different things. Learning is hard. Learning requires you to expend often significant efforts to do something that you're uncomfortable with because you don't already know how to do it. Real learning is not something that can be done to people. It has to be what people do for themselves. They need to build new knowledge on top of old knowledge and reflect and practice and get better. And that's truly very, very difficult to do. Training, on the other hand, is very much what you do to people. You give them access to content. You put them through courses, you put them in a room, you talk at them. It's an input-based mentality and practice. Whereas a lot of learning doesn't happen in any sort of formal context where someone is giving you information. It often happens through reflection or me researching stuff or me observing things. Real learning is very hard. It's rather unusual. And I think it's categorically different from training, which is why I really emphasize the getting people to do, because you can lead the horse to water, so to speak, but you can never make that horse drink. This is very consistent with some of the ways that we frame things on the show, which is that, first of all, some offerings of work are things that you can simply give people. Pay, you can give people; they don't have to do very much to receive it. But the farther you go toward transformation offerings, the more you get into things that must be co-created. And the burden of that creation is going to be increasingly on the side of the people being transformed. I've often wondered this question. It seems to me that there are some things that we can learn by being taught. And there are some things you just can't. And I would imagine that learning scientists have thought through this. I remember when I was teaching fiction and poetry at the University of Colorado, you know, you come in as a practitioner, I didn't know how to teach it. I read a book and it said, writing can't be taught, but it can be learned. And so the job of a writing professor is to put people into a situation where they can learn it. And it seems like writing is absolutely one of those things. Sports are something you can't do by reading about it. But there might be things that you can.
Do learning scientists have like a spectrum of teachability? Yes. And I would say, well, I cannot speak for the entire field, but I would say for myself and for those that I studied with and learned with, I think that you can teach people a lot of things. You can teach people, I think, largely anything. But teaching, again, is just one part of learning. So even with writing, you're going to become a faster, better writer if you have a combination of some expert who knows more than you do, giving you some tips, but really giving you feedback. We think about teaching so much as the communication of whatever, shortcuts, concepts, methods, whatever. But really the most significant predictor of someone's learning, aside from what they already know, is feedback. And that feedback needs to come from the environment and/or from someone who knows more than you do, who is looking at your practice and saying, have you thought about this? Have you tried that? That went the wrong way. And so this concept of what have learning scientists determined can be taught? It's pretty much anything. But again, teaching is only a piece of the puzzle. And in some cases, it's a very small piece. Great. That reformed my mental model there. That was very useful. Before we start talking about your move into EPAM, what is EPAM? A lot of us don't know. EPAM is a large professional services organization, global, that focuses on very high-end software engineering, AI and data science. So what does this mean? It means that our core business is helping our clients build highly complex digital platforms. So if you think about stuff that you interact with on your computer or on your phone, these different systems, often EPAM has had a hand in building those. We don't build our own platforms to sell. We just help our clients build platforms that they sell or they use to run their businesses. And so when I say that we're large, we're about 60,000 people. We're headquartered in the US, we're publicly traded, about $5 billion. And we're in something like 50 countries. It's quite a global organization, but it is not super well known simply because our ethos is we don't talk about ourselves, we just do good work. And that is how we grow the business. And a lot of the companies that people do know contract with EPAM to build complete solutions in many cases. So it's not like providing a software engineer to a company; it's a solutions company. And many of the name-brand companies that you might think of have turned to EPAM for things that you use, probably. Yes. Yes. I will also say, one thing that we don't do a lot of is any "keep the lights on" work. So we have companies that will outsource IT to different providers, and they will just maintain stuff and migrate stuff and whatever. We do a little bit of that. But primarily, as you say, we're building product, core digital, very sophisticated product, almost always including a very significant data component and AI component. And I mean old-school AI, as well as the newer gen AI. But it all comes together in the products that we develop for our clients. So what were you doing just before joining EPAM? And what drew you to it? And what did you find when you got there? So just before joining EPAM, I was faculty at the University of Maryland. I got my PhD there. And after graduation, I decided to teach there.
I actually taught and researched and also ran the Transformational Learning Office, which was basically a support function within the university, where we helped faculty teach better. We worked with employers to figure out what are the skills that they needed to hire for. We worked with faculty to evolve the curriculum to better meet the needs of employers and students. But it's all very much focused on transformation of the university itself, the teaching function and the learning functions in particular. And I was there for like five, seven years. And as part of my work there, the CEO of EPAM, who I had also never heard of, reached out to me and said, hey, would you mind doing some consulting work for us? So not working for EPAM, but just consulting to EPAM. And the reason that he reached out was because he had been hearing from clients for many years that EPAM engineers were just different. They just had more skills, they were more engaged, they were more productive, they just did qualitatively better work than many other competitors out there and often better than the client's own employees. So the clients kept saying to our executive team, what's your magic? What is your secret sauce? How can we create something like this in our own organizations? And the CEO, the executive team, frankly, they weren't sure because they were the founders of EPAM. They're the ones that have built the organization for the past 35 years, whatever it is. And they really only knew EPAM from an employee standpoint. They weren't entirely sure. And so they asked me to come in and take a look. And what I discovered was that it's a little bit of a unicorn organization, in that a lot of the things that I would teach in terms of highly productive, highly valuable, employee-centered human capital practices were actually done at EPAM, whereas I'd only ever really heard of them being done in theory. Learning here at EPAM is very different. The infrastructure that supports learning is very different. The employee practices, how we hire, how we promote, all of these things are very, very different. The experience is very different. And when I saw what these software engineers had built without the help, I'm going to be very clear, of any psychologists, any HR people, what they just built through business desire, intuition, and trial and error was so incredible that I thought, you know what, I want to go work for that company. And so that's when I came over to the organization. It was because I was so impressed and inspired by what they had done from a people perspective, not just learning, but especially learning. And I thought, you know what? I love higher education, but this is the new frontier and I want to be part of a company like that. And it's good to point out here, which you implied earlier, you don't sell any technology related to learning or skills or management. In fact, you don't sell products. And the reason I think that that's important is so often when you hear somebody say, we got skills to work, they're selling something, right? Especially when consultants say, oh no, this is the way to go. I'm always like, really? But here you were, and you came to this space and you found something quite different. And I think that's, we're going to spend a lot of our time talking about the nature of the difference. And I think that the object of our conversation is going to be essentially how EPAM functions as a learning organization and other things. Yes.
And to reiterate, yeah, we don't sell anything in this space. So the fact that I spend so much time talking about it is crazy because it is not our core business whatsoever. How important are skills to how EPAM runs? Very. It is central to our business. From the outset, I want to say that it's not just skills that are important to us. Tell me more, because I agree data on its own is nothing. Yes. Expand on what is beyond that. First of all, we collect data on everything and we always have. The data infrastructure at EPAM is actually the secret sauce, but it's this generic orientation toward we need data to understand our business and improve our business outcomes. And so this goes back to when the company was founded in 1993. Remember, in professional services, our business is matching people to work. What we sell is expertise and time. And so software engineers, data scientists, they wanted some data-driven way to make this match. And so they hit upon two data elements very early on to do that. And the first one was understanding the work itself. So taking a project from a client and breaking it down into the tasks that were required to achieve that objective for the project. So you knew what you were trying to solve for. Then they would take those tasks and translate those into, okay, what are the skills required to do those tasks? And then who has those skills in our employee pool that can do it? So this is how they matched work and people: by breaking down work, and not breaking down people, but understanding the attributes of people that would support that. This was the very, very beginning. And then of course, once you had those data, it turned out they were very useful for other things besides staffing. And so they use it for other things as well. But what we also learned over time is that while skills are central, very, very, very important, they are not the totality of what information we need to optimize work. We need to understand the skills of people, the goals of people. We need to understand the contexts in which they have deployed those skills, in the industries, with the clients, in the very specific tasks. We need to know who is available. We need to know so many different things about people to help us understand them and to then optimize them for work and optimize work for them. And therefore, the data models that we have around people and work are very, very sophisticated. And again, it is not to say that skills are not important. We couldn't function as a company if we didn't have the skills data. But I want to just say from the outset, it is not just about skills. To me, this whole skills conversation should really just be described as data-driven decision making around people, with skills being one element, but not the only one. I like that you mentioned contexts, too. We had Matt Beane on the show, who studies learning, essentially how people learn in organizations, and he's looked at how technology can slow down novices becoming experts for various reasons. But he said his definition of a skill is not the ability to hit the golf ball and not even the ability to hit the golf ball close to the hole. It's the ability to hit the golf ball close to the hole on a muddy slope in the rain at night. He said skills are contextual and they are much dirtier than book learning would lead you to believe. 100%. Yeah. So I was going to ask you about context.
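Before going further, a concrete rendering of the matching idea just described may help. Below is a minimal Python sketch, not EPAM's actual model: every name, weight, and data point is hypothetical. It decomposes a project into tasks, takes the union of the skills those tasks require, and ranks people by skill overlap, counting a skill double when it was demonstrated in the client's context.

```python
# Toy illustration only (not EPAM's model): decompose a project into tasks,
# take the union of skills those tasks require, and rank people by skill
# overlap, weighting a skill higher when it was demonstrated in the client's
# context. All names, weights, and data are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Task:
    description: str
    required_skills: set

@dataclass
class Person:
    name: str
    # skill -> contexts in which it was verified, e.g. {"python": {"fintech"}}
    skills: dict = field(default_factory=dict)

def project_skills(tasks):
    """Union of the skills required across all tasks in the project."""
    return set().union(*(t.required_skills for t in tasks))

def match_score(person, needed, context):
    """0..1 score: 1 point per matched skill, 2 if matched in this context."""
    pts = sum(2.0 if context in person.skills.get(s, set()) else
              1.0 if s in person.skills else 0.0
              for s in needed)
    return pts / (2.0 * len(needed))

tasks = [Task("build ingestion pipeline", {"python", "kafka"}),
         Task("model churn risk", {"python", "ml"})]
people = [Person("Ana", {"python": {"fintech"}, "ml": {"retail"}}),
          Person("Ben", {"python": {"retail"}, "kafka": {"retail"}, "ml": {"retail"}})]

needed = project_skills(tasks)
for p in sorted(people, key=lambda p: match_score(p, needed, "retail"), reverse=True):
    print(p.name, round(match_score(p, needed, "retail"), 2))  # Ben 1.0, Ana 0.5
```

The context weighting is the part the golf-ball example is pointing at: the same skill, exercised in a different setting, is worth less for this particular match.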
I'm glad you already addressed it, which is tricky, because now in your data model you've got to have not just a whole set of skills, you have to have a whole set of contexts that people might have used those skills in, potentially countries, potentially businesses, and those skills are mapped in a person to the context in which they may have used them. OK, that's very sophisticated already, because even a skills taxonomy on its own is dang hard. Just a skills taxonomy is hard. And the fact that now you've got a data model that brings those together, that's really tricky. I think skills frameworks are hard because people often build them relative to nothing. And this is why I talk about tasks all the time. OK, a skill is going to only be evidenced when someone's performing a task. And very importantly, whatever that task is, is going to require more than one skill. It is a very complex and sophisticated way of thinking about stuff, but you have to do it that way. But if you're just trying to build skills taxonomies absent context, it's just a bunch of words that are largely meaningless, because operationalizing those words is necessarily now talking about tasks in context. And so I've never attempted to build a skills framework absent work because I don't see the point. And I also can't imagine that it would be particularly useful. Yes. One of my arguments against skills is that performance is an emergent property of people and that knowing all of their ingredients is not predictive of their performance. And I will admit up front, I think it's a little bit of a straw man. And the reason I say that is one of the arguments I make is, if it's all about skills, how is it that the U.S. Olympic basketball team could lose? They're clearly the best basketball players in the world. They're the all-star team from the United States, where basketball is played the most. How could they possibly lose? The answer is, they win gold nine times out of ten. So if I had that level of prediction, just getting into the ballpark is enough. That's why I say, is it fully predictive? No, it's not. Does it get us closer? Yeah. Probably. For sure. People often will have this, you know, they push back on skills as not being enough, which is true. My response is, what do we have now? It boggles the mind how we make such incredible, expensive decisions that affect people's lives based on, I don't know what, but it's not often data. And even when you have the data, it might not be very good data. And so I'm an empiricist. I believe that you can measure pretty much anything pretty well. And in theory, even if we had deeply sophisticated models around people looking at skills and, I don't know, anything you can imagine, it's still a predictive model. It doesn't mean that it's going to be a hundred percent explanatory. It's not, because there are factors outside of people that have a very significant impact on their performance. And there are factors inside people that are not systematic. So like when you're having a bad day, all kinds of things can happen wherein a person's performance would not necessarily reflect what their skills would predict. But I don't think that we can make a hundred percent explanatory power our goal, because A, it's going to be too expensive and it's not necessary to get to that level of sophistication, and B, because you can never get there. There's just no way that you could account for all of those things, nor should you need to.
But to me, the opposite of that, which is not looking at data, I think is equally, probably actually worse, because then you're basing these incredible decisions on all kinds of things that are not data. Well, and just to make a distinction there, there's data that lives in databases and then there's data that lives in our heads about each other. Yes. But that's very partial, because no one of us experiences the whole company; we only experience parts of it. So I want to, just for a little bit, get technical or geeky here. Let's talk about, first of all, the technical architecture. So one of the things that EPAM did from the very beginning is said, we're going to have one shared source of data and everything else is going to live on top of that. And so how important is that? And what does that even look like? How extensive is that shared data architecture? Because it could just be, this is the shared data architecture that supports the people practice. Or it could be, no, we have a shared data architecture that serves all parts of the company. And it's really just one thing. So what's going on there? So it's the second one. The way that I like to describe this is, if you can imagine little boxes, little islands, each island or each box is a system that people have in the company, a system of record or work. It could be your ATS, it could be JIRA, it could be your Salesforce, whatever it is. Companies have dozens or hundreds of these little boxes. And each one of those boxes is by definition a data silo. It has its own database and it uses that database to do stuff. The way that companies get the data to connect today is that they draw little lines or build little bridges between the different boxes. And those are called integrations. And it gets very complicated and messy very quickly because you're just trying to build lines everywhere. And invariably, those bridges or those intersections are very poorly designed, they don't go very deep, they don't go both ways, it creates a lot of mess. So from the beginning, EPAM decided to not deal with that. So instead, if I think about it conceptually, instead of arrows or little lines going between all the little boxes, at EPAM, we have 130-something boxes as well, we have all these little data pieces, but all of the lines go up into one big enterprise data layer or platform, and it covers every system in the company, every system, so systems of record and systems of work. And in this data layer is where we have, for example, a single source of truth for skills. We have a single source of truth for tasks, and this level is where all of our AI and ML and automations occur. And then on top of that, we have all of these interfaces where you can interact with the data across the different systems. And so this unified data model is the foundation of how EPAM operates as a business, with skills being just one example of how that infrastructure creates value. And this is, I think, important for companies to understand: that unified data model is also the foundation of how effectively we can use AI to support operations in the business. Because if you think about AI, it uses data as fuel. And so if you have data silos, you are necessarily limiting the ability of AI to support your business. And so because EPAM has all of its data in one place, it's a mesh, not a lake; you still have pockets, but they work together. We are able to leverage AI across the business to do all kinds of different things.
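One way to see why the single layer matters is the arithmetic: with n systems, point-to-point bridges grow roughly as n(n-1)/2, while connections into one shared layer grow as n. The sketch below is a toy, with hypothetical names and no claim about how EPAM's real platform is built: each system publishes records into a shared layer keyed by entity type, so a cross-system question becomes one query instead of a bespoke integration.

```python
# Toy contrast, not EPAM's implementation: every system publishes records into
# one shared layer keyed by entity type, so cross-system questions are answered
# by one query instead of a bespoke integration. Names are hypothetical.
from collections import defaultdict

class EnterpriseDataLayer:
    def __init__(self):
        self._store = defaultdict(list)  # entity_type -> [(source_system, record)]

    def publish(self, system, entity_type, record):
        self._store[entity_type].append((system, record))

    def query(self, entity_type, **filters):
        """Return records from ANY system matching the filters."""
        return [(src, rec) for src, rec in self._store[entity_type]
                if all(rec.get(k) == v for k, v in filters.items())]

layer = EnterpriseDataLayer()
layer.publish("ats", "person", {"id": 7, "skill": "python"})
layer.publish("lms", "person", {"id": 7, "course": "ml-101"})
print(layer.query("person", id=7))  # one query spans both systems

# Why the hub wins on scale: pairwise bridges vs. connections to one layer.
n = 130  # "130-something boxes"
print(f"pairwise integrations: {n * (n - 1) // 2}, hub connections: {n}")
```

For the 130-something systems mentioned above, that is on the order of 8,385 possible pairwise bridges versus 130 connections into the shared layer.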
And AI native companies, so companies that are being built in people's garages today, that are going to compete in every industry, are also going to be designed this way. Because data is extremely powerful when it is accessible and when you can use it to do things. And so companies that are built as AI natives, they're built as data natives, really. And this is the model that I think so many organizations need to understand. Because at least at EPAM, we believe that this is the future. And again, skills is one of the things that you can get out of it. But it is not even the most important one. We think that this is fundamental to business success in the future. And I would argue that absent that kind of integration, skills don't do anything, hardly anything. Correct. The number of times I've seen bespoke skills taxonomies spun up for specific use cases, but they don't talk to each other. So you're not gaining any value from scale by adding them. And then you buy a new out-of-box software, and it can't ingest the data, so you lose your history. I could just list 99 things that make the alternatives not work very well. But when I heard you speak in Stockholm, somebody asked, how can a company become more like EPAM? And you said, we are a terrible transformation story. You can't learn anything about transformation from us. And that's because we were born this way. And we didn't have to become this after the fact. And you built what percentage of the applications that actually access this data? 99%, most of them. So 99%. So you built the data structure underneath, you build the apps that live on top of it. That's not something that companies can decide to do tomorrow if they have an installed base. No, and they shouldn't. And again, what AI natives will do, who are being built today, they might do what EPAM has done. I don't know. But when we work with clients who are trying to now solve for this data integration problem after the fact, what we do is essentially we go use case by use case, and we build, let's say it's, you know, data that crosses three different systems. We'll build a little data puddle, I think of it as a puddle, on top of those three different things to solve that use case. And then they want to solve this other use case. So we'll build another puddle over here. And you build puddle by puddle by puddle. And if they're architected in the right way where you're looking ahead, because they all have to be done very similarly, when you have enough puddles, you're going to get your lake, so to speak. So when we're doing this for clients, that's how we're approaching this. Our goal and the client's goal is to get to this enterprise data architecture and this kind of unified platform. But you don't just go there on day one, because you can't, you don't need to, and it's too complex, and you shouldn't even try. Instead, you go use case by use case, and you build what you need to build, but in the right way, so that it all connects over time. Hey, everyone, I want to let you know about some upcoming speaking events. If you happen to be in the Great Lakes area, on September 30th, I'm keynoting the HR Track at the UW-EBC 27th Annual Emerging Best Practices and Technology Conference in Madison, Wisconsin. The conference pulls in some fabulous speakers to discuss topics across all of business, not just HR.
Also, in Oakland, California, September 17th and 18th, two of our past guests at Work for Humans will be speaking at the Responsive Conference. Bree Groff will be talking about her sparkling new book, Today Was Fun, and Simone Stolzoff will be talking about his next book. So, check it all out at responsive.org, use promo code 11FOLD, that's 1-1-F-O-L-D, to get a substantial discount. All right, hope to see you there. Still non-trivial, I mean, it was never trivial, but it's especially not trivial, which is, I one time built an ontology that was designed to cover concepts from finance, facilities, and HR, and to some extent, sales. So we were going to have one ontology to unify them all. And one of the things we discovered was that even the concept of time was actually quite different. Project managers see time differently than people in finance do. Finance thinks of time in quarters; they don't care about weekends, they don't care about holidays. You come over to project managers who are doing work and they see weekends, they see holidays, they think of person hours. So what we found was that we had to get pretty abstract. Part of what I would say is that the data layer had to be really smart in order to create a seamless experience for both people in finance and for people in project management or any other part. And so are the databases that are forming your lake, if it's a lake, it's not a puddle, ocean, your data ocean, are they relational databases or are they graphical? No, no, no, they're graph databases. It's actually super cool. Our skills ontology, it doesn't stay still. If I go into it, it's like a bunch of webs and they all move around and I can drill into them and drill out of them and it's so complicated. But it has to be that way for exactly the reasons that you've articulated, because there are certain skills and whatever, things that people bring to the work, that are similar, and even if the words are the same, the actual practice is different. So again, this is why we also have all of this context around each one of those terms, both in how it's being applied in terms of a task and the context in which that task is occurring. And so, I know enough about data to be dangerous. I would have to have somebody else come on to really explain this, but it is absolutely not a relational database. It is a knowledge graph, and it's actually many knowledge graphs put together into whatever a super knowledge graph is. It's such a fantastic and complicated thing. Which takes a lot of skill to build in the first place, but is very flexible after the fact. An example is you don't have to have fixed hierarchies of skills that are three levels deep in a leaf-and-node structure. Instead, you just have concepts that are related to each other, and you can change that relationship tomorrow and it can go infinitely deep. It's really much more powerful, but not easy. This is sort of a thing, which is you might need a company of software engineers to really pull that off. Yes. And what's interesting is that, again, I'm thinking about the world of AI. So I spend a lot of time thinking about what the future is. And knowledge graphs are one of those super important things that companies are gonna have to build, not just related to skills and work, but to understand their own business.
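To make the contrast concrete, here is a hypothetical miniature of a skills graph in Python; the content is invented for illustration. In a graph model, skills are nodes with typed edges, so a relationship can be rewired at any time and traversal can go to any depth, with no fixed three-level hierarchy baked into a schema.

```python
# Hypothetical skills graph: concepts are nodes, relationships are typed edges.
# Adding a level or rewiring a relation is one new edge, not a schema
# migration, unlike a fixed leaf-and-node hierarchy. Content is invented.
from collections import defaultdict

edges = defaultdict(list)  # node -> [(relation, neighbor)]

def relate(a, relation, b):
    edges[a].append((relation, b))

relate("python", "used_for", "machine learning")
relate("machine learning", "includes", "deep learning")
relate("deep learning", "includes", "transformers")
relate("transformers", "related_to", "prompt engineering")

def reachable(start, max_depth=10):
    """Walk typed edges to any depth; depth is a query choice, not a schema."""
    seen, stack = set(), [(start, 0)]
    while stack:
        node, d = stack.pop()
        if d == max_depth:
            continue
        for _, nxt in edges[node]:
            if nxt not in seen:
                seen.add(nxt)
                stack.append((nxt, d + 1))
    return seen

print(reachable("python"))
# Tomorrow's rewiring is a single new edge, no migration required:
relate("prompt engineering", "related_to", "llm evaluation")
```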
Because if AI is gonna help you operate your business, you actually need to have knowledge graphs around the different elements of the business so that there's a data foundation that makes sense to pull from. And so I think a lot of people are gonna become more familiar with knowledge graphs in the next few years because companies will have to develop them. Again, skills is one area, but really broadly, like to me, again, skills is just one concrete instantiation of a new way of operating as a business. And it's such an interesting use case because I think it's so important, but it is just one. If you can dig underneath to appreciate the concepts underlying a skills-based organization, I think you would have a really good idea of the future of work, period. Yeah, skills are just a great test case because they're one of the more complicated things out there, because humans are one of the more complicated things out there. So let's talk about what this means to the business. The first question is, I'm still gonna tie it to skills because it's a nice anchor. Otherwise, we just talk about the whole business and I think we would get lost. But what use cases are supported using information about people that is stored in a smart way? Largely anything having to do with people. We use people data and work data. Again, it's people and work data. I have to always reiterate they go together. We use it to understand our workforce, to do workforce planning. We are looking out, trying to predict what the people needs of our business are going to be two years out. We need to start today to make decisions about getting to that point. So we need to understand where we think we're gonna go, understand what we have today in terms of people, and then start closing the delta through hiring, mobility, learning, whatever it is. And so we use it for that. And can I say on that one, I have a list of things that I think are usually a waste of time. One of them is workforce planning. And the biggest reason I think it's a waste of time is because routinely we make the plans, but the plans are not connected to the transactional services that are actually gonna enact the workforce plan. But because you have a consistent data architecture, you are able to share that information from workforce planning to the transactional services. And so that's an example of something I would normally say, don't do that. But in your case, it's working. I would also say the thing about what we're trying to be in two years isn't something that's built by HR. This is derived from the business. It really is what you would imagine good workforce planning is. You're looking at a business strategy, you're looking at a location strategy, or like a regional strategy. You're looking at a competitive strategy, and you're turning that into, here's our goal. What does that mean from a people perspective? What do we have today? Okay, how do we close that gap? Let's go make it happen. I wanna emphasize the fact that it is not just the people organization doing this. I know a lot of companies, you get a budget, you solve for it. That's not how it works at EPAM. So anyway, we use skills for that. We use skills for hiring, obviously. We use skills for internal mobility. We use skills for learning, for training, for education, development. We use skills for risk management, which is one of my favorite use cases. We use skills for strategic retention decisions. We use it for everything.
And staffing, of course, as I mentioned from the very beginning. We use it to support and inform every people-related decision that we make. I can't think of an example where we don't use skills. I'm sure there are some, but I would say for virtually all decisions around people, skills are at least one of the factors that we are looking at. So this touches upon one of my concerns about anchoring too heavily on data about people. I'm saying this more and more broadly as we go along. It can be part of a mindset that treats people as inventory and that acts upon people as a means to an end for the company. And it can be very one-sided, where the business is using this data to manipulate a workforce without concern for the workforce itself. Now at EPAM, one of the things I really heard when you were presenting in Stockholm is that it's very balanced, in the sense that everybody in the company is looking at those skills and is using them for themselves. So what does that look like? Essentially what I'm saying is, how do we democratize? How do we make this something that employees can use to have agency in their career and have agency in terms of the kind of work they want to do? I'll answer on two levels. The first level is very macro. Again, EPAM is professional services. What we sell is the time of experts. The other thing to know is that our people go back and forth to companies like Google and Facebook. They are extremely sophisticated and extremely in-demand individuals. And therefore, as a business, it is central to our success to find and keep great people in our company, and great people want to have a lot of things, including a lot of transparency and agency. So for our business, our foundational business, a big part of what we do is keep good people happy. So that's the big picture. In the smaller picture, how does that look in practice? It is exactly as you said. We use people data to optimize the employee experience and actually to make a lot more of career mobility and navigation through the organization self-service and informed. So here's one of my favorite examples. In most companies, people are promoted when their manager lets them or pushes them forward or whatever it is. But the manager can be a gatekeeper. And there's a lot of talent hoarding in organizations, where people are not allowed to move into other roles, either horizontally or vertically into a new, more senior role. As long as EPAM has been around, as far as I know, so forever, we have not had that approach. So the way that it works at EPAM, if I wanted to be promoted, I don't talk to my manager. I should, but I don't necessarily need to. Instead, in the system, I say, I am ready for promotion. And the system will automatically pull a random selection of peers in the role that I want to be promoted into who have been trained as assessors. And they basically put me through a performance-based interview process. They look at my certifications. They look at my skills. They interview me. They sometimes will give me performance tests, but they're the ones who are essentially evaluating whether I am ready to be in this new role. And then they'll make a determination of, yes, she's ready, but she needs to shore up these couple of things. Go do that quickly. Or no, she's not ready. Come back in six months. So they're the assessment committee. They send their recommendation back to my manager.
Now, my manager, in theory, can veto this, but my manager would have to justify that veto to our executive team and explain why my manager's desire to keep me in my current role should trump this random selection of peers who have determined that I am actually a good fit for this other role. This is one example of the way in which we're trying to use our data for good, for the optimization of employees. Another example is, we have this big mobility culture because we're project-based. We actually want people to be on projects that they are excited about and they're enjoying and they're skilled for. We also know that people can get stuck on projects for a really long time and they get bored and tired, even if they had the right skills; like, they're just done. And so we have a big culture where we're tracking a lot of that information and we're not just allowing people to go and say, I wanna go somewhere else, which we do, but actually proactively pushing opportunities to people, where I'll get an email saying, Sandra, you've been on this project for six months. Have you considered these other projects, which require similar skills and are in geographies I've marked as desirable? We're trying to use the data to optimize humans for work and work for humans. That's your podcast name. Hey, check that out. No accident, by the way. No, as I was saying it, I realized. And it is just foundational to how we work as a company. And that's why largely I think that we've been successful, because we're not just using the data to, as you describe it, manipulate people. We're using it to optimize people, but by doing so, we're actually optimizing our business outcomes. From your perspective about how learning happens, it seems like the only way that people can or are going to be able to learn is by actually doing the work that they don't know how to do. In other words, if I already know how to do it, then I may not learn anything by doing it again. But the way that learning happens is not entirely training; it is largely by doing. And so it seems to me that there must always be this little bit of a vacuum, a little bit of a space between the skills I know how to do and the tasks that I'm going to be doing, and that that's necessary. Yeah, 100%. If you already had the skills or if you already could do it, you would be doing it. And so learning is necessarily closing a gap between what I can do today and what I want to be or need to be able to do tomorrow, by putting me in a situation where I may be doing things that I don't already know how to do. And so I'm not matching people with the perfect match for what they already know how to do. I'm matching people with what they probably, I'm guessing here, what is within the realm of possibility for learning during the life of the project or something like that. Yes, like the stretch assignment. We have this concept. And again, there's a data element to this, where we actually look at skills in terms of skill families, because what we've learned is that skills cluster together naturally. And we know that there is an optimal level of challenge from a learning perspective. If I go into a project or a role and I only know how to do 12% of it, I'm going to fail. I'm going to be overwhelmed. It's just not going to work. So there is an optimal zone; it's called the zone of proximal development. It's how much can be new for me while I can still successfully learn as I go.
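As a rough sketch of that stretch-assignment logic (the 15-40% band below is illustrative, not EPAM's number), you can score a placement by how much of the project's required skill set is new to the person and flag it only when the novelty falls inside a target band:

```python
# Rough sketch of a stretch check; the 15-40% band is illustrative only. As
# noted in the conversation, the real band varies by person, motivation, and
# the project's tolerance for learners.
def novelty(person_skills: set, project_skills: set) -> float:
    """Fraction of the project's required skills the person does NOT yet have."""
    if not project_skills:
        return 0.0
    return len(project_skills - person_skills) / len(project_skills)

def classify(person_skills, project_skills, low=0.15, high=0.40):
    n = novelty(person_skills, project_skills)
    if n < low:
        return "comfort zone: little learning expected"
    if n <= high:
        return "stretch: optimal challenge, learn on the job"
    return "overload: too much is new to succeed"

me = {"python", "sql", "etl"}
project = {"python", "sql", "spark", "airflow"}
print(classify(me, project))  # 2 of 4 required skills are new -> overload
```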
Now, the complexity here is that that zone of proximal development is not standard or uniform, even for a single person, because it will depend on a whole lot of factors around my motivation, around how much time I'm willing to put into it, and I guess a lot of other things that go into it. But basically, when we're looking at placing people in such assignments, which is deliberate, we're doing it on purpose, we will guess at a percent of difference that is going to be okay. But we'll also check in with both the person and the project team to say, are we willing to do this? Because there are going to be some projects where, frankly, there's too much going on with the client. Whatever it is, we can't have a bunch of learners on that project because it's too critical or time sensitive or whatever. And then there are projects where we have a lot more tolerance for people messing up and learning as they go. And that's why a lot of our internal projects, so project work where EPAM is the client, we're doing stuff for the company, those are the best training grounds, because they're real. We're a real company, we have real challenges, we have real opportunities. But we have more tolerance for people who are learning and trying and growing and reflecting and getting better. And we build that into the way that we think about learning. That is actually calculated for us as a learning experience, as opposed to just a delivery opportunity. I want to touch briefly on ontology management, because you do it a little differently. And it's one of those things that are sort of back office, not that much fun. But how does EPAM keep the taxonomy fresh? How do you manage to keep the ontology up to date? I mean, especially in the last two years, how many completely new things have we had to learn how to learn? Yes. Okay. So this goes to governance of our skills infrastructure. Governance of our skills infrastructure at EPAM is owned by the business, which means that the cloud practice, for example, owns all of the skills related to cloud. This is because they're the experts: they actually know what skills are required, they know what those skills look like in practice, they know what tasks are related to those, they understand the work. And therefore, they are the ones who are tasked with keeping that up to date. The nitty gritty of it is that there are actual owners, people for whom part of their job is to own this. It's multiple people, and they're constantly crowdsourcing perspectives from other people in the practice, inside EPAM and outside EPAM. And importantly, they're not just looking at what we need today; part of their job is also looking ahead at what we think we're going to need, to be able to predict that. So they're the ones who are responsible for it. And you can't change it every week, because if it's constantly changing, then you can't actually align anything to it; you can't teach people against it, it's always a moving target. And so there's a periodicity to our governance approach. So minimally, every two years, depending on the role, people will look at the skills and update them. For some roles, it happens a lot more frequently. In the past, the way that we would determine that was just by guessing; we would say, this seems to be moving as a field, let's come back in six months.
What we are doing now (not universally, but we're moving toward it) is actually having an algorithm that lets us know what percent of skills have changed and what the velocity of change is, and then an alert will go out. And it will say, okay, we have to look at this whole thing again. And so it will actually bring the team together to do that. But it's data informed. So we're looking at all these data sources. But humans are the ones that look at it and decide and lock down the ontology for a period of time until it gets revised. The footprint of the ontology, does it or should it have items in it that are not important to the business? And the reason I'm asking that is, I'm asking whether or not it allows for things that are important to people in the workforce, who may actually have a perspective on what's needed, or what they want to learn. And I'm wondering if it extends beyond the known requirements of the company. Yes and no. So it extends beyond the known requirements, because to me, that phrase assumes requirements today, as opposed to what we predict we're going to need in the next couple of years. It does not include stuff that people just are good at. The classic example is underwater basket weaving. I could be amazing at that. It is not going to be in our skills ontology. And there's a reason for this. The reason is, it costs money to collect skills data. And so if that skill is in our ontology, remember, it's not self-report. It isn't just me saying I'm good at underwater basket weaving. The way that our models work is that we need to have verified skills. Someone's going to have to evaluate whether I am in fact good at underwater basket weaving. And that does not serve us as a business. So that's why we don't do it. Now, there are pros and cons to that model. The con is, let's say there's something that's important that no one really appreciates today. But then we realize, oh shoot, that actually is important. But we have zero information about it, because we've never actually tracked it. There is an absolute trade-off to how we have decided to do this. I don't necessarily think that our way is the right way, but I don't think it's a wrong way. I think it's a choice that companies will need to make. And I also think there's going to be a gray area of stuff that you think you might need, but you're not really sure, as opposed to just tracking whatever employees want to track, whether or not it's useful. Because there are just some things, like underwater basket weaving, that unless things change dramatically in our world, are never going to be important to you. Yeah, there's a category that's worth exploring, which is things employees know the company needs, and the company itself doesn't know. In other words, things unknown to the central services that would define an ontology. So the patent that I wrote for Cisco Systems on ontology management was about crowdsourced ontology management. And the idea behind crowdsourced ontologies is that deep specialists know more than generalists at any level. And I felt that it was needed, in particular, because many of the skills that we need to know about are invented by somebody in the company. And that's true, I'm sure, for EPAM too, but certainly in the companies that I've worked for. So we just had a show where we talked about Dimitri Glazkov, and he built something called Breadboard. Well, he invented it just like four months ago. Not four months, it was a year ago. And now it's a skill that somebody could have, but it didn't exist before he invented it.
And so I worry a little bit about that edge. So let me be clear. Remember, for EPAM, the ontology is owned by experts, and it is crowdsourced. What you're talking about, which is what we do, I think is incredibly important, because, as you said, no central authority is going to know this. This is exactly right. This is why we don't do that. It's why we have skills owned by the practices, who are crowdsourcing internally and externally: what are the skills required? That's different than Sandra saying, I have a basket weaving skill, which is not important. I like that. It is crowdsourced. It's not just a wide-open crowdsourcing where every individual can contribute. It is instead teams of experts in local areas who are actively conducting the crowdsourcing of what's coming. And so you have the ability to crowdsource internally to keep it up to date and keep it accurate. And I was thinking, even within cloud, there's just so much depth to the work that we do. Can we talk for just a second about skills validation, and how you start to know that somebody might have a skill and how you verify over time that they do? Yes, this is a super, super important issue that a lot of people don't think about. So skills are latent constructs. You cannot measure them directly. And even if somebody had a skill at a certain time point, that can wane, or they can have it in one context but not another. So there's a lot of nuance around whether or not someone has a skill. So at EPAM, we have many, many sources of data that we pull together to determine if somebody has a skill. And we use both inference and verification. So we will infer skills from a ton of things. We will look at someone's LinkedIn profile, for example. We will infer skills from whether someone has presented, or what they presented on, externally. So we'll scrape external data. We will infer skills from what activities you've done in our LMS. Like if you've clicked through some courses, we're not going to assume that you know it, but we kind of have an idea that you might sort of know it. So we're going to infer a little bit there. We will infer from your network. If you interact regularly and in a deep way with a whole bunch of people who have a certain skill, we're going to infer that you might also have that skill. We will also infer it from skill families. So if you have skills A and B and D, we'll assume that you have skill C. Those are all inference. And at EPAM, each person has a whole skills profile. Those inferred skills will show up as gray. They're on there, but we're not sure if you have them. We need to verify. The verification of skills happens primarily through experts who know what they're looking at, looking at your work, and telling us if you have the skills. So this can come in a lot of different forms. It can be where your project manager, the person that's actually seeing your work on a day-to-day basis, is looking at your work and saying, yes, I've seen evidence of that skill in this project at this time point, at this level of sophistication. We'll get verified skills data from certifications, from external exams. We'll get that from asking your peers or our clients to give us feedback on your skills. We're going to ask you to give us feedback on your skills. We're going to get verification from a bunch of different sources, but the most critical ones, the ones that we actually weight most heavily in our model, are experts giving you feedback that is context-specific and time-boxed.
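One hedged way to picture how such signals might combine; every signal type, weight, and half-life below is invented for illustration, not EPAM's model. Inferred signals contribute a little, verified expert review contributes most, and all evidence decays on a shelf life:

```python
# Toy skill-confidence model; every signal type, weight, and half-life here is
# invented for illustration. Inferred signals (profile scraping, LMS clicks,
# network) count a little; expert, context-specific review counts most; and
# all evidence decays on a shelf life, with soft skills fading slower.
import math
from datetime import date

WEIGHTS = {"linkedin": 0.2, "lms_activity": 0.2, "network": 0.1,
           "peer_feedback": 0.6, "certification": 0.8, "expert_review": 1.0}

HALF_LIFE_DAYS = {"technical": 365, "soft": 3 * 365}

def confidence(signals, skill_kind, today=None):
    """signals: list of (signal_type, observed_date). Returns a 0..1 score."""
    today = today or date.today()
    half_life = HALF_LIFE_DAYS[skill_kind]
    evidence = sum(WEIGHTS[kind] * 0.5 ** ((today - seen).days / half_life)
                   for kind, seen in signals)
    return 1 - math.exp(-evidence)  # squash accumulated evidence into [0, 1)

signals = [("lms_activity", date(2023, 1, 10)),   # inferred, old -> faint
           ("expert_review", date(2024, 6, 1))]   # verified, fresher -> strong
print(round(confidence(signals, "technical", today=date(2025, 1, 1)), 2))
```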
And in that way, we have a decent, although imperfect, but still very good skills model for every person. Now, let's say I had a skill today, and I'm demonstrating that today. If I don't demonstrate that skill for three years, even though it was true today, three years from now it won't appear on my profile anymore, because there's a shelf life. People don't know if I still have that skill, because they haven't seen evidence of it. We also are very concerned about putting the skills data in context. So I can have a skill in this context, but I don't necessarily have it in that other context, or there's no evidence of that. Anyway, it is a very complex data model, but it is very, very rich and very much oriented toward high-quality data that we're confident is true. Do different skills have different shelf lives? An example is, if I'm a native speaker of Mandarin, it doesn't matter if I don't do it for three years, I'm still a native speaker of Mandarin. Yes, for sure. We do have that. Soft skills tend to have a longer shelf life, but they are also highly context-specific or dependent. Yes. Technical skills have a shorter lifespan because the technology keeps changing. I'm certain that our data model is not perfect for that. Not trying to be perfect, just trying to get good enough, right? Yes. We're decently good at it. We can always be better. But again, for us, even if it's a soft skill, let's say, what we prioritize is seeing evidence of it. And that's why we have so many data points from so many sources, because we're hoping that we're going to see evidence of that thing a lot. So yes, that soft skill will stay in my profile for longer, or expertise areas, like learning science, that's going to be on my profile for hopefully a really long time. But we would still drop it from a weight perspective if we're not seeing evidence of it. What don't you love about how you're operating? I don't love, although I do understand the business rationale for, the fact that we don't apply the skills power uniformly across the organization. So all the things that I'm talking about that we do, we do for a large segment of our employee base, but not the whole thing. We do this for client-facing technical roles primarily. And the reason for this is that it's expensive. It's expensive to collect data and it's expensive to use data. It's also a very dynamic task space. Exactly. It's a very dynamic set of work that you have to do. Yes. More dynamic than maybe people in the finance department who are gonna do the same thing for 20 years. Correct. It changes faster, but the ROI is also higher. It's higher because if we get this wrong, if we're putting the wrong people in front of our clients, our business will suffer. The way that I think about it is, these are the roles that are closest to P&L for us. And that's why we invest in this infrastructure. So our HR business partners or our finance people, we do use skills in their employee journeys, but the data aren't as good. Therefore we don't rely on it as much. Therefore we are making decisions based on however people make decisions without data, right, whatever that is. And so I wish that we had this infrastructure more uniformly applicable, but we don't. And in a perfect world, we would. I have a few closing questions to ask everybody. And I don't know if you've listened to the end of one of my shows to hear these closing questions. One of them is, what do you, Sandra, hire your job to do for you? EPAM hires you to do something for it.
What do you hire EPAM to do for you? Give me stuff that I don't already know how to do. Put me in positions where I need to learn stuff that creates demonstrable value for people I can see and talk to. Give me a business problem and then trust me to go and solve it and provide evidence to you that I've done that. You want to solve puzzles? You want them to be hard, but not impossible? You want to learn along the way? And you want to see the effect that you are having when you help others? Yes. It's not enough to throw it off into space and assume that you're helping others. You actually want to see it land and you want to have that experience of a detectable difference that you're making. Is it important to you at all that you're uniquely qualified to do it? In other words, not just something that can be solved, but maybe something only I can solve. I've never asked that question before. That's a new question. Part of me would love for that to be the case, but the fact is that because I very much want the solution to happen for the people that we're trying to help, whether that's employees or clients or whomever, it's actually bad if it's just me, because I get hit by a bus. And so I optimally want challenges where my expertise is necessary, but it is on me to build that expertise in others so that they can carry the work forward. So it doesn't all fall to me, because if it did, I could run out of capacity or I could be too stretched thin or I could just not be available. And you want your value to be seen. Yeah. I think I heard that in there, which is that creating value in such a way that other people can see the value that I'm creating is a part of it. Yes, I love when people, clients or internal people, come to me and say this thing, and I say, well, have you thought about this? And they're like, oh, I love that idea. That is Sandra fuel; it makes me so happy and feel so fulfilled for other people to see that as a special, exciting way to solve their problems. What does your job cost you? This is not unique to EPAM. This is more of a me problem. I love working. I love working. And in every job that I've had, I have done too much. That is just how I have defaulted from the very beginning of my career. So I don't know that it's EPAM that's doing this to me; I'm doing this to me. And the cost is health, mental health, and time, time with people that matter to me. And so it is one of my constant struggles to balance the need, the drive that I have to solve problems and be creative and be seen and have all these exciting moments happen, with needing to be around for my family for a really long time and in the short term, both physically and emotionally. It made me think that there's a risk to providing work that people find extraordinarily rewarding, because it's like an addictive drug and it can be bad for them. But it's also true that horrible work can be bad for you. And so what the heck, what are you gonna do? Work that's miserable and is making you really unhappy is not great for the family either. No. I read, I think it was in The Atlantic, this was years ago, probably 10, 15 years ago, a piece asking why people work so much. And normally, I don't know what all the hypotheses are, but the one that really struck me was, well, sometimes they just like what they do. And I remember reading that and feeling a little bit seen, but also recognizing that there is some degree of addiction. It's addicting to have success and have people recognize it; that feels good.
And also there's a lot of psychology associated, I think, with people who work too much, even when it's because they want to. And I don't know that it's any healthier. Well, I suspect that long-term it's not any healthier than people working in jobs that they don't like. The difference is when you feel the impact, whether it's today or 10 years from now. Do you solve puzzles in your spare time? I do. Which ones? I'll do Sudoku, I'll do the New York Times puzzles. I recently discovered the LinkedIn puzzles; I do them every morning. When I wake up, that is the first thing I do: I'll be half awake and I open up LinkedIn, open up the New York Times, and I just solve all the puzzles. I really enjoy that. Yeah, exactly. And in fact, many of the people that I've interviewed who hire their jobs to give them interesting puzzles to solve, you ask them that question and they say exactly what you say. One of them said, yeah, I like to do the GRE logic tests over breakfast; I have a test prep book. And now you're going to do it. I love that idea. Yes, I did LSAT test prep and it was exactly that. It was all the logic puzzles. They're so hard and they're so fun. It feels so powerful once you've solved them. Yes, well, we know what to get you for Christmas. So it's gonna be the test prep booklets. Well, thank you very much for coming on the show today. Where can people learn more about you and where can people learn more about EPAM? EPAM is epam.com. To learn more about me: in the past year and a half, I have been posting a lot on LinkedIn, talking about skills and data and AI, because I think this is the way the world is going to go, really. And it is way harder and way more nuanced than I think consulting houses and technology firms will tell you. And because EPAM lives and breathes this, we can tell you at least how we do it. We can't give you the transformation story. We have a point of view on that, but it's not like we have experienced that. But we believe that this is the right thing for employees. It's the right thing for the business. And so we're out there talking about that. I'm out there talking about that. And I do it a lot on LinkedIn. Yes, it's a really good resource. One of the things you have on your LinkedIn page is a complete set of links to your LinkedIn posts. And it's a great resource. And each one of them is quite bite-sized. At some point, you should put them all into a single database and have AI summarize it for you, because I think it's a book. Okay, so, but I didn't make it. Some guy made this for me, and he was like, hey, would it be okay if I made an AI version of you? He was like, would that be weird? And I was like, no, I don't think so. But it's on there, and he will load up everything in there. And I love talking to it. Like, when I have to write an article, I first ask Sandra GPT to draft it for me. And I'm like, yes, this is so much easier than doing it from scratch every single time. I agree so much with this AI. This AI is brilliant. I love this AI. I understand. All right. That was great talking. Yes. That was great talking. Thank you. Thank you. This was great. I appreciate it. Thanks for joining me for another episode of Work for Humans. If you enjoyed this episode, please give us a five-star rating wherever you listen to podcasts, and share the show with one person you think would get value from it. Believe it or not, this really helps us grow the show and reach more people who want to build the kind of work that people really want.
As always, thank you to my producer, Jason Ames, at 9th Path Audio for his insights into content and his high standard for quality. A final note: the opinions shared here are my own and not the views of Google or Cisco Systems. Thanks again for listening. See you next time.

Key Points:

  1. Learning is distinct from training, emphasizing the importance of people actively engaging in building new knowledge.
  2. EPAM Systems is a global software engineering and professional services company with a unique approach to learning and work.
  3. EPAM's data-driven model focuses on understanding skills and contexts to optimize work for individuals.

Summary:

The transcription discusses the difference between learning and training, highlighting the importance of individuals actively engaging in the learning process to build new knowledge. It introduces EPAM Systems, a global company known for its unique approach to learning and work, led by Chief Learning Scientist Sandra Loughlin. The conversation delves into EPAM's data-driven model, emphasizing the significance of understanding not only skills but also contexts to optimize work for individuals effectively. The data infrastructure at EPAM plays a crucial role in matching people to work by breaking down tasks and understanding the skills required. Skills are essential but not the sole focus, as the company's sophisticated data models also consider various contextual factors. The discussion underscores the complexity of skills frameworks and the importance of integrating context to predict performance accurately.
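
As a concrete illustration of the model summarized above, here is a minimal sketch of how evidence-based skill records with shelf lives and contexts might be represented. Everything in it (the Skill and Evidence classes, the SHELF_LIFE table, the linear decay) is a hypothetical reading of what Sandra describes, not EPAM's actual data model; the three-year technical shelf life comes from her rough figure, and the six-year soft-skill figure is an assumed placeholder.

```python
from dataclasses import dataclass, field
from datetime import date, timedelta

# Assumed shelf lives per skill type: the episode mentions roughly three
# years for technical skills; soft skills decay more slowly (six years
# here is an illustrative guess, not a stated figure).
SHELF_LIFE = {
    "technical": timedelta(days=3 * 365),
    "soft": timedelta(days=6 * 365),
}

@dataclass
class Evidence:
    source: str       # e.g. "project assignment", "peer assessment"
    context: str      # where the skill was demonstrated
    observed_on: date

@dataclass
class Skill:
    name: str
    kind: str                                    # "technical" or "soft"
    evidence: list[Evidence] = field(default_factory=list)

    def weight(self, today: date) -> float:
        """Down-weight a skill as its most recent evidence ages,
        and drop it entirely once it passes its shelf life."""
        if not self.evidence:
            return 0.0
        latest = max(e.observed_on for e in self.evidence)
        age = today - latest
        life = SHELF_LIFE[self.kind]
        if age >= life:
            return 0.0                           # no recent evidence: off the profile
        return 1.0 - age / life                  # linear decay (illustrative)

    def contexts(self) -> set[str]:
        """Contexts in which the skill was actually demonstrated."""
        return {e.context for e in self.evidence}

# Example: a technical skill last evidenced two years ago keeps about a
# third of its weight; past three years it disappears from the profile.
kafka = Skill("Kafka", "technical",
              [Evidence("code review", "fintech platform", date(2023, 6, 1))])
print(round(kafka.weight(date(2025, 6, 1)), 2))  # -> 0.33
print(kafka.weight(date(2026, 6, 2)))            # -> 0.0
```

The decay function here is the simplest possible choice; the key behavior it reproduces from the conversation is that a skill without fresh evidence gradually loses weight and eventually drops off the profile, while soft skills and expertise areas persist longer.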

FAQs

What is the difference between learning and training?
Learning requires significant effort and is self-driven, involving building new knowledge and reflection. Training, on the other hand, is more input-based and directed at individuals.

What does EPAM do?
EPAM is a global professional services company specializing in high-end software engineering, AI, and data science. They help clients build complex digital platforms and do not sell their own products.

Why are skills central to EPAM's business?
Skills are central to EPAM's business as they match people with work. However, beyond skills, understanding contexts, goals, and other data about individuals is crucial for optimizing work.

How does EPAM approach data-driven decision making?
EPAM emphasizes a holistic approach to data-driven decision making by considering various elements beyond just skills. They focus on a sophisticated data model that includes skills, contexts, and other factors.

Why does context matter in a skills model?
Context is crucial because skills are contextual and complex. At EPAM, skills are mapped to the contexts where individuals have used them, making the data model sophisticated and effective in optimizing work.
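
To make the context point concrete, here is a hedged sketch of context-aware matching that reuses the hypothetical Skill class from the sketch above. The match_candidates function and its 0.5 weight threshold are illustrative assumptions, not EPAM's actual matching algorithm.

```python
from datetime import date

def match_candidates(profiles, required_skill, required_context,
                     today, min_weight=0.5):
    """Return people whose profile shows recent enough evidence of the
    required skill demonstrated in the required context (illustrative).

    profiles: dict mapping person name -> list of Skill objects
              (the hypothetical Skill class defined in the sketch above).
    """
    matches = []
    for person, skills in profiles.items():
        for skill in skills:
            if (skill.name == required_skill
                    and required_context in skill.contexts()
                    and skill.weight(today) >= min_weight):
                matches.append(person)
                break
    return matches

# Example: only people with fresh, in-context evidence are proposed.
# candidates = match_candidates(profiles, "Kafka", "fintech platform", date.today())
```

The design point this illustrates is the one made in the conversation: a skill claim alone is not enough to staff a role; the match also checks where the skill was demonstrated and how recent the evidence is.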
