Learning platforms I: Problems

In recent years, my wife and I have spent a lot of time teaching programming. Our students had different backgrounds and skills, as well as different learning goals. The first pain point most of them reported right away was the lack of practical content. And you don't need to be an experienced mentor to notice it: threads on Quora, Reddit, and Hacker News are full of questions like "Where to find coding tasks for beginners?" or "Please suggest a resource to practise programming".

So what's the problem? – you may ask. Aren't there plenty of practice-it sites out there? Aren't those people just lazy, looking for excuses to avoid DOING things?

Actually, no. And the goals of this series of posts are to explain:

  1. Where the existing platforms fail to fulfill their promises.
  2. What the reasons behind those failures are.
  3. How we are going to change that at Paqmind.

The target audience is people interested in self-learning (not necessarily STEM), as well as people running their own education-related online companies.

Problems

Successful learning is built on iterating between theory and practice. Today you can get theory from books or videos, and there is no shortage of great examples of both. With practice, however, the situation is much worse. Few books or platforms conclude their chapters with exercises, which I personally find strange. I can only guess at their reasoning:

Exercises are ineffective without answers, yet the presence of answers will make readers cheat. If it fails either way – why bother adding them at all?

So when it comes to the practical part, you'll be out of luck and will inevitably spend a lot of time searching for something at least partially useful. Let's look at the main pitfalls waiting for you along the way.

1. The Lack of Exercises

A few years ago I left the Python ecosystem, where I'd had a successful commercial career. I had realized async was the future, and NodeJS was miles ahead of the Twisted / Tornado alternatives. Naturally, I soon wanted to practise, and the resource proposed most often was NodeSchool. I solved the first 13 challenges and came back for the second part, but couldn't find it.

Did I miss something? – I thought. Where is the content?! It's called "NodeSchool", so I must be blind to be seeing only the first lesson.

But that was all they had. I finished those 13 exercises in roughly an hour and left disappointed and bewildered. I knew I'd have to solve something closer to 130 exercises before I could call myself a Node beginner. And I was far from a webdev newbie – a real newcomer would need much more than that.

Has anything changed since then? Not much. They've just added a bunch of similarly shallow framework "courses", and that's it. As if no-one ever asked for more. The same can be said about CodeSchool and Udemy, which give you a few challenges for a topic that deserves a few dozen.

An important remark: I don't want to bash any particular platform here, but I have to name names to avoid hand-waving and false politeness. I don't consider Paqmind to be in direct competition with any of them, and as you'll see later, there's nothing similar to what we're going to build here.

So the first idea I want you to take away is that there weren't, and still aren't, enough practical tasks, even for the most popular libraries and tools. See for yourself:

Check the results: mostly dead or empty repos in the first 5 pages – worse than you might expect :(

Where are all the exercises? Are they non-searchable, or do potential authors simply prefer to write books and tutorials? I can't tell you for sure, but the bottom line is that it's unacceptable when a mentor can't advise his/her students to just "go and practise there" because there is no "there".

But what about FreeCodeCamp? – you ask. Surely it contains everything you need.

Really? At the time of this writing, FreeCodeCamp offers you a reposted version of the same NodeSchool lesson. Search "async" (for "asynchronicity") or "react" (for "reactivity") and behold a vibrant nothing:

no-async

Despite all the community, all the authors, all the buzz they have, as a learning platform they still provide nothing but algorithms and a few piles of reposted content.

But maybe their offline events provide all the missing material?! Honestly – I have no idea. I'm interested only in remote education, so I look only at web content. As these sites are constantly proposed for remote learning, I have every right to analyse them and make my own judgment in this category.

It's worth agreeing on terms here. Some resources use the word "algorithm" to describe anything more or less challenging. In the context of this series, I use it broadly – to describe any data querying or transformation. For lack of a better term, I classify even the following as an algorithm: [1, 2, 3].map(x => x * 2).
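To make that broad definition concrete, here are a couple more snippets of my own (illustrative TypeScript, not taken from any platform) that I'd file under the same "algorithm" label:

```ts
// Anything that queries or transforms data counts as an "algorithm" here,
// however small. These examples are mine, for illustration only.

// Transformation: double every number.
[1, 2, 3].map((x) => x * 2); // [2, 4, 6]

// Querying: pick the names of adult users.
const users = [
  { name: "Ann", age: 17 },
  { name: "Bob", age: 22 },
];
users.filter((u) => u.age >= 18).map((u) => u.name); // ["Bob"]
```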

So basically, whatever you want to practise online these days, you have to choose between algorithms and projects, with other types of practice barely represented. Which leads us to the second point.

2. The Algorithm Fatigue

I have mixed feelings on this topic. Algorithms are an integral part of programming and excellent brain sharpeners; I personally enjoy writing them. Algorithms are also said to be perfect at keeping unserious candidates out of your company – at least judging by the complaints:

Whiteboarding tends to favor those with more time to spend poring over interview prep books, as well as those who come from more traditional education backgrounds [source]

While "traditionals" are busy at cheating, the better engineers pop out from nowhere ^_^

To be honest, though, I still think we're overplaying it. I clearly see a slow trend of reducing the whole learning field to interview questions and tricks, in the spirit of "get hired first, then learn".

Most of the popular algorithms were, unsurprisingly, proposed by the best computer scientists: Dijkstra, Hoare, Pearson, etc. And I bet they spent days thinking about those problems. It's hugely disrespectful to their work to assume an average person will "solve" the same task in 30 minutes. Oh wait – those solutions are supposed to be memorized, just to prove your interest...

So while I share the opinion that technical interviews are broken, I won't join the current backlash for the reasons I mentioned – I have my own line of criticism.

Programming holds many insights. As someone who has spent 10+ years doing it, I can tell you it's impossible to reach all of them through algorithms alone.

Algorithms are rare. You may sit for hours solving a single recursion puzzle. That's not bad in itself, but in sum it may give you a distorted perspective on your future work. Most jobs contain fewer slow-and-deep tasks and more fast-and-shallow ones. So while preparing yourself for the most challenging experience ever, you can actually be undermined by routine.

Algorithms are limited. For all their value, importance, and scope, they are just one kind of exercise. I believe changes of format are crucial for avoiding tunnel vision and reducing the number of intellectual idiots.

But no-one says you should do only algorithms! – you cry... And you're wrong – they actually DO say that, implicitly, with their yearly plans and schedules. Go check their forums sometime and see what's going on there.

The current go-to learning scheme is:

Learn theory → Solve exercises → Make projects → Find a job

and it's fine, in general. My objection concerns the de-facto substitution of "exercises" with "algorithms". Is there a single non-algorithmic challenge on CoderByte or Exercism (both calling themselves "learning platforms")? Why do people recommend competition-based sites like CodeWars, CodeForces or CodeChef to beginners?

The only competitive site I recommend to my students is HackerRank. Mostly because it covers more than a single CS domain and has a great community.

3. The Project Fatigue

As you can guess, I have an objection to projects as well. The first rule of learning says "you should learn one thing at a time". What do you learn with a project? In general, you learn to see the "bigger picture" and to assemble "moving parts". Which seems a pretty logical step AFTER you've seen that picture in the small and mastered those parts in isolation.

And here is the problem. We advise beginners to "take the pill" far before they are prepared for it. I wonder what percentage of such projects actually get finished. Is it closer to 5%? Or maybe 3%? How many potential careers were ruined because of that?

When teaching children to read, we gradually show them letters, words, simple sentences, and then simple books. Moving forward bit by bit, with a lot of practice. And the result is impressive. When teaching people to program, we show them a few "tricky algorithms" (impossible to write at that level), ask if they "understood", and then tell them to "choose a project"...

draw-a-head

And what are those projects, by the way? For example, here is a popular list people keep recommending to beginners. It mixes hard and easy problems in a weird way. The instinctive urge to "go through them one by one" would be a disaster. You would end up hating yourself: I was given a puzzle, I couldn't solve it, everyone else succeeded – I must be stupid (behold the danger of implicit assumptions).

Many of them are very advanced tasks, with experts arguing endlessly about how to approach them. They can be simplified, but that itself requires skill! In engineering, simplicity is something you achieve, not something you get by default.

So how is a beginner supposed to act then?!

So please do everyone a favor – stop recommending realistic projects to people who have had no practice. They should solve exercises, tasks, challenges – artificially constructed cases where the required knowledge is thoughtfully distributed.

4. The Missing Variety

It's time to show why algorithms and projects don't cover the whole picture.

What about refactoring? In real programming you spend a lot of time rewriting code. And yet you never studied this: you never practised "improving functions" or "naming variables" in separate exercises. At best you can rely on hints from books, but mostly you are left to yourself and your own intuition.
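To make it concrete, here's the kind of small, isolated "improve this function" exercise I have in mind – a sketch of my own in TypeScript, not taken from any of the platforms above:

```ts
// A sketch of an "improve this function" exercise (code and names are mine).

// Before: unclear names, index loop noise, magic number.
function f(a: { p: number; q: number }[]): number {
  let x = 0;
  for (let i = 0; i < a.length; i++) {
    if (a[i].q > 0) {
      x = x + a[i].p * a[i].q * 0.2;
    }
  }
  return x;
}

// After: descriptive names, an intent-revealing helper, no magic numbers.
const TAX_RATE = 0.2;

type LineItem = { price: number; quantity: number };

const taxFor = (item: LineItem): number =>
  item.price * item.quantity * TAX_RATE;

function totalTax(items: LineItem[]): number {
  return items
    .filter((item) => item.quantity > 0)
    .reduce((sum, item) => sum + taxFor(item), 0);
}
```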

A growing number of questions like:

proves there is a clear interest in the community for this kind of content. There are good books and articles dedicated to "patterns" and "best practices", yet somehow there's no platform that teaches them. CodeWars and others may provide some "bugfix" tasks, but that's about it.

What about theory consolidation? I can't imagine myself hiring a person who can't define a function. Not because the definition is important by itself, but because not knowing it clearly signals a lack of understanding of the derived concepts:

set -> relation -> function -> side effects -> pure function ->
memoization -> caching -> concurrency -> distributed systems

So you'd be happy to check your grasp of important concepts like "function", "closure" or "prototype" right after you've studied them. Where are those quizzes?
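To illustrate why the chain above matters, here's a minimal sketch of my own (TypeScript, hypothetical names) of the "pure function → memoization" link: caching results is only safe when the function is pure.

```ts
// A minimal sketch: memoization is only correct for pure functions.
// (Illustrative code of mine, not from any platform mentioned above.)

// Pure: the result depends only on the input, no side effects.
const double = (x: number): number => x * 2;

// Impure: the result depends on hidden, changing state.
let calls = 0;
const doubleAndCount = (x: number): number => {
  calls += 1;
  return x * 2 + calls; // result drifts between calls
};

// Naive single-argument memoizer.
const memoize = <A, R>(fn: (a: A) => R): ((a: A) => R) => {
  const cache = new Map<A, R>();
  return (a: A): R => {
    if (!cache.has(a)) cache.set(a, fn(a));
    return cache.get(a)!;
  };
};

const memoDouble = memoize(double);
console.log(memoDouble(2), memoDouble(2)); // 4 4 – caching is safe

const memoCounter = memoize(doubleAndCount);
console.log(memoCounter(2));      // 5 (2*2 + 1)
console.log(memoCounter(2));      // 5 again – the cache hides the changing state
console.log(doubleAndCount(2));   // 6 – the uncached version keeps drifting
```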

What about induction and analogies? Questions like:

are raised frequently. Comparison is one of the strongest reasoning tools known to humans, yet I've never heard of a place to practise it.

While quizzes can express binary is/isn't and has/hasn't relations, and so can cover some primitive split-in-two cases, they fail with more complex ones. Exercises like "Match X to Y" or "Sort X by Y" seem very natural to us. How many of those have you ever seen?

What about tests? Spend a year solving 1000 algorithms on a platform (reaching the "Ninja Tier") – and you're still a person who has written 0 tests. I imagine that's a painful realization. Or do you think writing tests is intuitive and needs no practice? Sorry to disappoint, but QA/tester jobs wouldn't exist in that case.

The volume of auxiliary code (tests, deployment) can easily exceed the volume of business code in large, mature projects, especially those written in dynamic languages. So you'll regret the absence of such skills immediately after you get a job.
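For illustration, here's the kind of tiny, isolated testing exercise that's almost never offered – a sketch of my own in TypeScript, assuming Node's built-in test runner; the function and cases are made up:

```ts
// A sketch of a small, isolated testing exercise (function and cases are mine).
// Assumes Node 18+ with the built-in "node:test" runner.
import { test } from "node:test";
import assert from "node:assert/strict";

// The function under test: turn a title into a URL slug.
const slugify = (title: string): string =>
  title
    .toLowerCase()
    .trim()
    .replace(/[^a-z0-9]+/g, "-") // collapse non-alphanumerics into "-"
    .replace(/^-+|-+$/g, "");    // strip leading/trailing dashes

test("slugify handles spaces and punctuation", () => {
  assert.equal(slugify("Hello, World!"), "hello-world");
});

test("slugify trims and collapses separators", () => {
  assert.equal(
    slugify("  Learning  Platforms: Problems  "),
    "learning-platforms-problems"
  );
});
```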

What about other engineering activities and the questions they typically raise?

Not to mention the non-technical skills and intuitions engineers should possess:

I'd like us to agree, at this point, that these are missing mostly because it's unclear how to verify them without a human.

5. The Sandboxes

Every respectable platform today tries to create its own editor + runtime sandbox and force you to code there.

Usually it goes like this: you have to read text rendered in a small font with low contrast, squeezed inside a tiny panel. Then you have to write code in another clunky panel, with line breaks going out of control. Don't forget to aim-and-scroll to see the tests. Everything is shaky, and personally I get a claustrophobic feeling just from looking at it.

codewars-editor

Needless to say how many bugs those "editors" carry and how uncomfortable the experience usually is. Interactive environments of the current breed are perfect for teaching uncertainty and lack of control – if that's what we want to teach younger people.

Bugs can be fixed and UIs improved... But the whole idea of cloud-based editors is justified when you have to pair-program or interview someone; it comes under a big question mark when you actually want to learn or work undisturbed. And software quality is only one reason.

While CodeWars' UI is decent in comparison, it will never be able to compete with the simplicity or performance of native UIs:

webstorm-editor

Here, on your own machine, you have an editor, a terminal, fonts and line heights of your choice. Colors, separators, layout – all tuned to your preferences. You're immune to an evil designer deciding to ruin the color scheme tomorrow, and to a careless programmer introducing a new bug right before your critical milestone.

If you hit a bug in a local editor, you simply roll back to a previous version (no forced auto-updates to begin with). If you hit a bug in a web app – one that breaks access to your core data – you can only complain on the Internet. No web-based software should have full control over your workflow; you don't want to be entirely at its disposal.

Even more important is the fact that you run the actual code. If the tests are broken, you can fix them immediately. No tickets, no interruptions... Have you ever pushed a wrong solution just to pass wrong tests? I have.

It's obviously better to use the same tools and processes for both work and learning. Why should there be a difference? Why would we praise Vagrant and Docker if such differences were a good thing?

I can see value in sandboxes for special cases. Mostly for kids – something visual like "Write code to help this poor turtle out of the maze". But I can't imagine an adult spending a year of their life at FreeCodeCamp, constantly pasting snippets into some form for feedback that's available on their own machine with far fewer keystrokes and mouse moves. It's just humiliating.


In the next article we're going to look behind the curtain and find out the causes of the aforementioned problems. Without that knowledge, we'd risk repeating them. Thanks for reading, stay tuned!