Write Case: If you have something to write to memory and a decent memory controller, and you ignore all caching, all you have to do is send a transaction to the memory controller with the data you want written. Because of memory-ordering rules, as soon as the transaction leaves the core you can move on to the next instruction, because you can assume the hardware is taking care of the write. This means a write takes virtually no time at all from the core's perspective.
Read Case: On the other hand, a read is an entirely different operation, and one that caching helps enormously. If you need to read in data, you can't go on to the next step in your program until you actually have the data in hand. That means you check the caches first and then memory to see where the data is, and depending on where it is, your latency suffers accordingly. In a non-threaded, non-pipelined, non-prefetching core, you're just burning core cycles waiting for the data to come back so you can move on to the next step. Cache and memory are orders of magnitude slower than core speed/register space. This is why reading is so much slower than writing.
Going back to the write transaction, the only speed issue you may run into is doing a read after a write to the same address. In that case, your architecture needs to ensure that the read doesn't hop over the write; if it does, you'll get stale data back. In a really smart architecture, as that write is propagating out toward memory, a read to the same address can be satisfied from the in-flight write data (store-to-load forwarding) long before the write ever reaches memory. Even in this read-after-write case, it's not the write that takes a while from the core's perspective, it's the read.
From a RAM perspective: Even if we're not talking about a core and we're only talking about the RAM/memory controller, a write to the MC results in the MC storing it in a buffer and sending back a response saying the transaction is complete (even though it isn't). Thanks to those buffers, we don't have to worry about actual DIMM/RAM write speeds, because the MC takes care of that. The only exception is when you're doing large blocks of writes and exceed the capacity of the MC's buffer. In that case, you do have to start worrying about RAM write speed, and that's what the linked article is referring to: the physical limitations of read vs. write speed that David's answer touches on. Usually that's a dumb thing for a core to do anyway; that's why DMA was invented. But that's a whole other topic.