Tombuntu

Four Tweaks for Using Linux with Solid State Drives

SSDs (solid state drives) are great. They’re shock resistant, consume less power, produce less heat, and have very fast seek times. If you have a computer with an SSD, such as an Eee PC, there are some tweaks you can make to increase performance and extend the life of the disk.

  1. The simplest tweak is to mount volumes using the noatime option. By default Linux will write the last accessed time attribute to files. This can reduce the life of your SSD by causing a lot of writes. The noatime mount option turns this off.

    Open your fstab file:

    sudo gedit /etc/fstab
    

    Ubuntu uses the relatime option by default. For your SSD partitions (formatted as ext3), replace relatime with noatime in fstab. Reboot for the changes to take effect.
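
    For reference, a hypothetical fstab entry for an ext3 partition mounted with noatime might look like this (the UUID and the other options are placeholders; keep whatever your fstab already has and just swap relatime for noatime):

    UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx / ext3 noatime,errors=remount-ro 0 1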

  2. Using a ramdisk instead of the SSD to store temporary files will speed things up, but will cost you a few megabytes of RAM.

    Open your fstab file:

    sudo gedit /etc/fstab
    

    Add this line to fstab to mount /tmp (temporary files) as tmpfs (temporary file system):

    tmpfs /tmp tmpfs defaults,noatime,mode=1777 0 0
    

    Reboot for the changes to take effect. Running df, you should see a new line with /tmp mounted on tmpfs:

    tmpfs                   513472     30320    483152   6% /tmp
    
  3. Firefox puts its cache in your home partition. By moving this cache into RAM you can speed up Firefox and reduce disk writes. Complete the previous tweak to mount /tmp in RAM, and you can put the cache there as well.

    Open about:config in Firefox. Right click in an open area and create a new string value called browser.cache.disk.parent_directory. Set the value to /tmp.
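
    If you would rather set this from a file than through about:config, the same preference can go in a user.js file in your Firefox profile directory (a sketch; the profile path varies per installation):

    user_pref("browser.cache.disk.parent_directory", "/tmp");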

  4. An I/O scheduler decides which applications get to write to the disk, and when. Because SSDs are so different from spinning hard drives, not all I/O schedulers work well with them.

    The default I/O scheduler in Linux is cfq (completely fair queuing). cfq works well on hard disks, but I’ve found it to cause problems on my Eee PC’s SSD. While a large file is being written to disk, any other application that tries to write hangs until that write finishes.

    The I/O scheduler can be changed on a per-drive basis without rebooting. Run this command to get the current scheduler for a disk and the alternative options:

    cat /sys/block/sda/queue/scheduler
    

    You’ll probably see four options; the one in brackets is the scheduler currently in use by the disk specified in the previous command:

    noop anticipatory deadline [cfq]
    

    Two of these are better suited to SSDs: noop and deadline. Using one of these in the same situation, the other application will still hang, but only for a few seconds instead of until the disk is free again. Not great, but much better than cfq.

    Here’s how to change the I/O scheduler of a disk to deadline:

    echo deadline > /sys/block/sda/queue/scheduler
    

    (Note: the above command needs to be run as root, but prefixing it with sudo does not work on my system. If you run into this problem, use sudo -i to get a root prompt first.)

    You can replace sda with the disk you want to change, and deadline with any of the available schedulers. This change is temporary and will be reset when you reboot.

    If you’re using the deadline scheduler, there’s another option you can tune for the SSD. This change is also temporary and is also made per disk:

    echo 1 > /sys/block/sda/queue/iosched/fifo_batch
    

    You can apply the scheduler you want to all your drives by adding a boot parameter in GRUB. However, the menu.lst file is regenerated whenever the kernel is updated, which would wipe out your change. Instead, I added commands to rc.local to do the same thing.

    Open rc.local:

    sudo gedit /etc/rc.local
    

    Put any lines you add before the exit 0. I added six lines for my Eee PC: three to change sda (small SSD), sdb (large SSD), and sdc (SD card) to deadline, and three to set the fifo_batch option on each:

    echo deadline > /sys/block/sda/queue/scheduler
    echo deadline > /sys/block/sdb/queue/scheduler
    echo deadline > /sys/block/sdc/queue/scheduler
    echo 1 > /sys/block/sda/queue/iosched/fifo_batch
    echo 1 > /sys/block/sdb/queue/iosched/fifo_batch
    echo 1 > /sys/block/sdc/queue/iosched/fifo_batch
    

    Reboot to run the new rc.local file.
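
    For reference, here is a minimal sketch of how the finished rc.local might look for a single disk (the shebang line may differ on your release; the important part is that the added lines come before exit 0):

    #!/bin/sh -e
    # lines added for the SSD scheduler tweak
    echo deadline > /sys/block/sda/queue/scheduler
    echo 1 > /sys/block/sda/queue/iosched/fifo_batch
    exit 0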

    [update] Commenter dondad has pointed out that it’s possible to add boot parameters to menu.lst that won’t be wiped out by an upgrade. Open menu.lst (Remember to make a backup of this file before you edit it):

    sudo gedit /boot/grub/menu.lst
    

    The kopt line gives the default parameters to boot Linux with. Mine looks like this:

    # kopt=root=UUID=6722605f-677c-4d22-b9ea-e1fb0c7470ee ro
    

    Don’t uncomment this line. Just add any extra parameters you would like. To change the I/O scheduler, use the elevator option:

    elevator=deadline
    

    Append that to the end of the kopt line. Save and close menu.lst. Then you need to run update-grub to apply your change to the whole menu:

    sudo update-grub
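
    With the elevator option appended, the kopt line from the example above would end up looking like this:

    # kopt=root=UUID=6722605f-677c-4d22-b9ea-e1fb0c7470ee ro elevator=deadline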
    

    [end update]

Want to know how fast your SSD or other storage device is? Using hdparm you can test the read performance of your disk:

sudo hdparm -t /dev/sda

The 4 GB SSD on my Eee PC 901 gets about 33 MB/s. My desktop PC’s hard drive gets about 78 MB/s. (What hdparm doesn’t show is that the seek time for an SSD is much, much lower than a hard disk.)

Have any other suggestions for SSDs, or disagree with any of these? Leave a comment to let me know.

Archived Comments

jobr

You are one fine piece of an evil Ubuntu mother******…

Speed it up… faster, wilder and louder :)

Nice post… You rock my Google Reader:)

dondad

There is a section in the grub menu called default where you can put boot parameters, and they will be added to the end of grub menu updates. I do this on my Ubuntu installation because I have to have reboot=b as an option or my box won’t reboot properly.

Andreas

Great post, thank you!

There are a few things you could improve. First, the sysfs options can also be set at boot time in a comfortable way using the package “sysfsutils” from the universe repository. You can configure your values to be set at startup in /etc/sysfs.conf. Just add lines like
“block/sda/queue/scheduler = deadline” or
“block/sda/queue/iosched/fifo_batch = 1”

You can work around your sudo problem elegantly by using the “tee” command:
echo deadline | sudo tee /sys/block/sda/queue/scheduler

(tee writes the standard input to both the standard output and the file provided as parameter.)

Keep up the good posts!
Andreas

Syn

That’s pretty sweet, I never thought of using tee like that. Of course the reason you can’t use sudo while redirecting stdout to a file (>) is because the fopen() for the redirect happens in the current shell, and the sudo is executing the child process (echo) as a super user.
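
For illustration, here is the difference in a quick sketch (hypothetical device name; the working forms are the same idea as the tee and shell -c suggestions in these comments):

# fails with "Permission denied": your non-root shell opens the file for the redirect
sudo echo deadline > /sys/block/sda/queue/scheduler

# works: the root shell performs the redirect itself
sudo sh -c 'echo deadline > /sys/block/sda/queue/scheduler'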

Ketil

“Using a ramdisk instead of the SSD to store temporary files will speed things up, but will cost you a few megabytes of RAM.”

Does it make sense to do this also on a normal hard drive?
What are the side effects, if any?

Syn

I would recommend mounting /tmp and /var/tmp tmpfs on any modern system. Speed benefits aside, there are often files which are left in temp dirs by sloppy programs, and these will fill up your drive at best, and compromise your security at worst. One example would be when using “Open With” in firefox it places that file in /tmp, but will not delete it if you click “Clear all” and the file is locked by the application you opened it with. Starting off fresh every boot is a good way to clean it out automagically.
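
For reference, a sketch of the two fstab lines, following the same pattern as the /tmp line in the article:

tmpfs /tmp     tmpfs defaults,noatime,mode=1777 0 0
tmpfs /var/tmp tmpfs defaults,noatime,mode=1777 0 0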

Tom

Andreas:
Thanks for the tips! Now I can see why sudo wasn’t working.

Ketil:
A ramdisk is still much faster than a hard drive. It could still be worthwhile, but you won’t notice as much of a difference as with an SSD.

flow

Well, did so on my XPS notebook with an HD, since I hardly use the 2Gigs of RAM (except when running a virtual machine.. which is rare). Works quite nice, and my personal experience is that it’s become a bit faster.

MadMike

The “noatime” tweak is outdated.

Newer incarnations of the Linux kernel offer a “relatime” option which updates the access time only when a file is written, and thus reduces writes nearly as much as “noatime” would, while still updating the access time of each file and thus retaining some backwards compatibility with tools that still rely on updated access-time information.

You will find that current Ubuntu installations all have the “relatime” parameter already on by default :)

anon2

Here is some actual information:

http://thunk.org/tytso/blog/2009/03/01/ssds-journaling-and-noatimerelatime/

kurazu

In many places people are also suggesting using the ext2 file system instead of ext3. Ext3 writes a journal to disk, which adds extra write cycles.

ntg

Just so that people looking at this really nice post now won’t be confused: ext4 is now the way to go with an SSD, since it can make use of the special way an SSD needs to clean a block before a rewrite.

Kevin Burton

Two notes…

Even with relatime the atime data needs to be in another section of the disk so it will still require a write.

Also, you might want to consider the noop scheduler as IO schedulers on Linux (at least the current ones) don’t make a heck of a lot of sense in the SSD world.

Kevin

Jaime Iniesta

Thanks Tom! I don’t have an SSD, but your tips for the tmpfs and the Firefox cache helped me a lot. Now my hdd is more silent and I hope it will last more years.

Richard

See http://www.storagesearch.com/ssdmyths-endurance.html for more about the ‘excessive writes wear out flash drives’ issue, which is mostly a myth at present. Basically you have to write evenly across the whole SSD, as fast as possible and 24/7, to even have a chance of wearing it out, and it typically would take many years even with this unrealistic usage pattern.

Lord Rybec

Actually, this is not a myth supposedly based on experience. This is a known disadvantage inherent to flash technology. Writing to flash chips damages the chip. It can take a very long time to cause sufficient damage to actually destroy the data integrity of a chip, but excessive writing (especially to the same blocks, like most modern OSs do) can dramatically reduce the life of flash media.

One of the reasons tests may not show this is because most flash drives contain extra unused memory that is used as failover when bytes on the regular part of the drive fail (I am sure the redirection to failover, once a block has failed, increases read and write times, although only marginally).

The problem with modern flash manufacturers is that they do not list technical specs on the package. This means I do not know if that drive can withstand 10,000 writes per block, or over 1,000,000 writes per block. This also means I do not know if the drive has 5 or 10 blocks of failover space, or half the total drive capacity.

Anyway, excessive write use, especially to the same location, very well can wear out a flash drive far more quickly than a hard drive. Modern technology and failover techniques can slow this, but it depends greatly on how much failover space the manufacturer provided and the technology used by the drive, and since these are not listed on the package, the dependability of any given drive is subject to many unknown factors.

In any case, even if I did have a drive known to have an average life that is 3 times that of magnetic hard drives, I would still use these techniques, because they will extend the life of the drive at least two if not five to ten times as long as it would otherwise have been.

Note: A user would not have to write the entire drive over and over to quickly destroy a flash drive. He would just have to focus on a single block, writing it until it failed, then continue the same pattern until all of the failover space was also destroyed. Temporary data storage and virtual memory use in most modern OSs fits this pattern almost perfectly. Once a single block has been made unreliable, the reliability of the entire drive is gone.

Another big disadvantage of USB flash drives is heat (having less experience with SSDs, I do not know if this is also a problem for them). Try copying a 25 MB file (or folder) to your USB drive, then feel the drive’s case. Imagine how hot the chips inside must be for you to be able to feel that heat through the plastic case. This heating (and then cooling in between reading or writing) is what destroys most small electronics. Reducing write cycles to USB flash drives will help circumvent this wear. If you are using USB drives internally as hard drives, I would recommend at least removing the plastic cases, if not adding heat sinks to the chips, to keep them cooler.

(Several years ago I was working on using a USB flash RAID for a mini-ITX motherboard and I did an enormous amount of research on this. Because flash technology has improved a good deal since then, I am going to actually try it this time. My recent research has found that the limited write cycles of flash are still a big concern when using it for running an operating system, but modern flash technology coupled with the techniques listed on this page should yield a system good for long-term use.)

Lord Rybec

cbo

Mr Rybec, you write “Note: A user would not have to write the entire drive over and over to quickly destroy a flash drive. He would just have to focus on a single block, writing it until it failed, then continue the same pattern until all of the failover space was also destroyed.”

I think you misunderstand why “writing the entire drive” is brought up in this context. Current SSDs supposedly incorporate “wear levelling”, where virtual blocks are mapped to real blocks so that each block gets a similar amount of wear. Therefore, writing to one location repeatedly would produce no more wear than “writing the entire drive”.

Your quote above would thus be FUD, except for the fact that as you observe, manufacturers don’t give details about how their drives work. It is not just the redundant blocks however, as you suggest, but rather, the actual wear levelling algorithm which is employed.

Anonymous

Richard: yet I remember some guy saying that he’s seen a wee bit too many burnt-out CF cards during his lab work to still put much belief into such statements.

IOW:

…better be safe than sorry…

anon2

CF cards are not SSDs. They may both have flash memory under the hood, but SSDs have controllers in them, and that is what is doing the remapping and uses the ‘spare’ memory.

The end result is that CFs will be an order of magnitude, or more, worse in wear out.

Anonymous

In my experience, there is still a definite difference between ‘noatime’ and ‘relatime’. ‘relatime’ is doing a lot more writes and slowing things down (note: I’m only concerned with performance here, not the idea that I’m going to wear out the SSD too fast).

I’m using the noop scheduler and it seems fine; much better than the default. I haven’t tried testing it against the deadline scheduler, though.

I use ext2 as well. I rarely have unclean shutdowns, and hey, I used it for years before ext3 existed without a problem. Seems to me more likely that the SSD as a whole would fail than that I would lose data to an unclean shutdown, and I have full backups anyway.

Bill Goldberg

Thanks for this. I had the write issues and Firefox was slow on the Eee PC 900; this fixed it.

Izkata

I have a normal hard disk, and moving Firefox’s cache to a ramdisk (/dev/shm is mounted as a 2 GB tmpfs on Ubuntu by default) has a small but noticeable speedup.

Olivier

Putting the Firefox disk cache on a ramdisk makes no sense to me; Firefox already has a memory cache. The disk cache is the 2nd-level cache. Why not disable the Firefox disk cache completely and set Firefox to cache only in memory?

Mnemnonic

You will see the difference when you close firefox and start it again. The memory cache in FF is wiped then, while the ramdisk-cache is not.

Anonymous

I was using the noop scheduler for my 30GB OCZ SSD and decided to try the deadline scheduler after reading your article. No major throughput differences but it seems the deadline scheduler uses a few less cpu resources. This is more of a subjective observation using the gnome system monitor.

cfq ~59 MB/sec
noop ~62.5 MB/sec
deadline ~62.5 MB/sec
anticipatory ~62 MB/sec

Greg Lindahl

Is nodiratime still a valid Linux thing?

Björn

noatime implies nodiratime, so only noatime is needed in fstab

Anonymous

Firefox puts its cache in your home partition. By moving this cache in RAM you can speed up Firefox and reduce disk writes. Complete the previous tweak to mount /tmp in RAM, and you can put the cache there as well.

Open about:config in Firefox. Right click in an open area and create a new string value called browser.cache.disk.parent_directory. Set the value to /tmp.

Where is that file about:config?

i can’t find it.

email me, please and thanks :-)

interbo

type in the address bar

Tom

Anonymous:
about:config is not a file, it’s a special configuration address. Just type it into your Firefox address bar and press Enter.

Anonymous

I would think that

sudo grub-set-default elevator=noop
sudo update-grub

should also work, no?

Anonymous

oh, no – grub-set-default seems only for boot entry, not for default parameters… sorry.

Dr. Who

I had problems with Hardy Heron where Mozilla would not shut down right away when I clicked it off. I would get the “do you want to wait or shut the application down now” box for a couple seconds. I applied the first two tweaks to my system and now I don’t have the problem any more. It works fine..!!
This is on a NON-SSD, standard hard drive installation.

Anonymous

Do these instructions need to be updated with the most recent updates on ubuntu? My eeepc ssd is sluggish again.

gus3

Here’s a crazy thing that worked for me, YMMV:

My 256M SD card was mis-behaving. I was unable to format it for any filesystem, VFAT, ext2, ext3 (yes, I know, journal bad for SSD). So just for giggles, I ran “badblocks -w” against it, a destructive test that writes all 1’s, all 0’s, 10’s, and 01’s.

It reported no bad blocks.

I was then able to format the SSD with ext2.

Maybe the badblocks program actually fixed the failing bits, and maybe it was just a fluke. As I said, YMMV. But, if the SSD is failing anyway, what do you have to lose? After all, you have backups, right? Right?

Anonymous

The reason is that SSDs work by preparing pages for writing by erasing them. There are several blocks per page. Typical page size is 4KiB, and typically 128 blocks per page, making erasures one half MiB each. Having added -w, you just told your ’puter to write over every single last block of your SSD four times over. Because you touched every single block for write, you erased everything. Otherwise there will be discrepancies between what’s written and what needs to be erased.

That’s what the more modern TRIM command is about: informing the drive of which blocks it may preemptively erase. It’s that erase/program (write) cycle which takes a long time, and thus makes your write performance sucky.

And it’s exacerbated by making a tiny write (compared to the page size), which makes the drive appear to wear faster. Imagine it as doing 128 writes of 4KiB or less each (such as a simple mtime update); then the 129th might have to perform an erase before it can actually be performed, even if it’s the same block being written as far as your OS is concerned… the SSD’s wear-leveling algorithms will spread that out over many physical blocks.

GHerZog

My EEEpc 701 4G just went haywire with an older version of EEEBuntu, so I am loading the latest V.3 to correct the problem.

In any event, I am staying with ext2, noatime, and no swapdisk as ways to conserve the SSD’s useful life.

EEEpc Users Forum offered a wonderful little bit of software that allows me to use a 2Gbyte SDcard to make a full backup image of the 4Gbyte SSD. I can restore in 6 minutes if all else fails.

We are now up to ext4, but I am reluctant to use the journaling.

gHerzog

Well, I upgraded my EEEbuntu to 9.xx and really wanted these tweaks to work. No problem with ‘noatime’, but the shifting over of the /tmp area seems to make the configuration unable to mount my SDcards or USB memory sticks.

I am somewhat of a conservative. For the SSD, there is no swap disk, ext2 for the file system, and ‘noatime’.

My machine is an EEEpc 701 4G.

leonardo

I understand that the noop/deadline scheduler is a good choice for SSD drives but what happens when I plug a spinning/traditional usb hard disk to my mini9?
I use such external hd for my backups and I really don’t want to mess it up!

iform

Just set the deadline/noop scheduler for sda on startup. I believe cfq will be used for all subsequent media. Specifying the scheduler as a kernel parameter sets it system-wide.

I put this in a file executed on startup. /dev/sda is my ssd.
echo deadline > /sys/block/sda/queue/scheduler

Chris Barrow

Tried the tip on moving my Firefox cache to /tmp and found that Firefox started using 50% of my CPU, so I had to remove the setting. Did anyone else have this issue?

khurtsiya

=================
Complete the previous tweak to mount /tmp in RAM, and you can put the cache there as well.
=================

Can anybody give step-by-step code?

TIA

Michael

James

Switching to deadline sure does speed up my eeePC 1000. Thanks!

dan

I and my 901 thank you for this. Earned a place in my Tips and Tricks Bookmarks folder to try out.

blue

Q: How do I convert my ext3 partition back to ext2?
Actually there is little need to do so, because in most cases it is sufficient to mount the partition explicitly as ext2. But if you really need to convert your partition back to ext2, just do the following on an unmounted partition:

tune2fs -O ^has_journal /dev/hdaX

To be on the safe side you should force a fsck run on this partition afterwards:

fsck.ext2 -f /dev/hdaX

After this procedure you can safely delete the .journal file if there was any.

CyrIng

Hello

About FS duplication :
I want to move my Arch x64 Linux root & /home FS from an ext4 HD to a brand new SSD (Corsair P64).

Should I just cp all files to a new ext2? (+ noatime, etc.)
Would ext4’s journal also be copied during cp?

Or would tar be better? Or just an Arch reinstall?

Thank you for your advice

zampasoft

Is there anything that we can do regarding /var/log? Is there anything that we could move to a temporary directory?

Vincent

@zampasoft Sure, see Allan Feid’s solution at http://allanfeid.com/content/eee-tips-reduce-disk-writes-utilizing-tmpfs

I expanded on this and now have a script set up that, upon shutdown or reboot, remembers all directories & files in /var/log (including access rights, owner and group settings).

To use it is easy enough:
1> Create a script called svlogdir.sh somewhere and make it executable (chmod +x svlogdir.sh).
2> Open it with gedit and put the text at the bottom of this reply in the file (gedit svlogdir.sh).
3> Run it as root to install (sudo ./svlogdir.sh).

You also need to edit your fstab (sudo gedit /etc/fstab) and put in the following line at the end to make /var/log a tmpfs:
tmpfs /var/log tmpfs user,noatime,mode=0755 0 0

Once you have completed both, do the following commands to complete the setup:
sudo rm -rf /var/log/*
sudo /etc/init.d/mklogdir.sh
sudo reboot

TEXT AFTER THIS LINE GOES INTO svlogdir.sh FILE
#!/bin/sh

if [ `id -un` != root ] ; then
echo "`basename $0`: Must be effective root user"
exit 1
fi

# install svlogdir.sh if needed
if [ ! -e /etc/init.d/svlogdir.sh ] ; then
cp $0 /etc/init.d/svlogdir.sh
/bin/chmod +x /etc/init.d/svlogdir.sh
cd /etc/rc0.d ; /bin/ln -sf ../init.d/svlogdir.sh S15svlogdir.sh ; cd - >/dev/null
cd /etc/rc6.d ; /bin/ln -sf ../init.d/svlogdir.sh S15svlogdir.sh ; cd - >/dev/null
fi

# create /etc/init.d/mklogdir.sh if needed
if [ ! -e /etc/init.d/mklogdir.sh ] ; then
/usr/bin/touch /etc/init.d/mklogdir.sh
/bin/chmod +x /etc/init.d/mklogdir.sh
cd /etc/rc2.d ; /bin/ln -sf ../init.d/mklogdir.sh S05mklogdir.sh ; cd - >/dev/null
fi

# redirect stdout to /etc/init.d/mklogdir.sh
exec > /etc/init.d/mklogdir.sh

echo "#!/bin/sh"
# parse all directories in /var/log
for dir in `/usr/bin/find /var/log/* -type d ! -name . 2>/dev/null | /usr/bin/sort` ; do
echo "if [ ! -e $dir ] ; then"
echo " /bin/mkdir $dir"
echo "fi"
mode=`/usr/bin/stat -c %a $dir`
owner=`/usr/bin/stat -c %U $dir`
group=`/usr/bin/stat -c %G $dir`
echo "/bin/chmod $mode $dir"
echo "/bin/chown $owner:$group $dir"
done
# parse all files in /var/log
for file in `/usr/bin/find /var/log -type f 2>/dev/null` ; do
echo "if [ ! -e $file ] ; then"
echo " /usr/bin/touch $file"
echo "fi"
mode=`/usr/bin/stat -c %a $file`
owner=`/usr/bin/stat -c %U $file`
group=`/usr/bin/stat -c %G $file`
echo "/bin/chmod $mode $file"
echo "/bin/chown $owner:$group $file"
done

exit 0

Mathieu

Your SSD sucks. You should not be getting half the write speed to your SSD compared to a hard drive. Decent SSD drives will write in the 180-200 megabytes/sec range w/ the type of sequential write test that hdparm does.

http://www.anandtech.com/storage/showdoc.aspx?i=3667

arthur

The sudo command is not working because the > causes the output to be written to the file by the currently running shell. You need a sudo’ed process to write it, so fire up bash under sudo and give it the command to run with the -c (for command) option:

sudo bash -c “echo deadline > /sys/block/sda/queue/scheduler”

Cdh

You don’t need bash here because sudo already includes this functionality:
sudo -s "echo deadline > /sys/block/sda/queue/scheduler"

Solid state hard drives

This is precisely why LED backlighting is more interesting to me in a ThinkPad than SSD. The new Fujitsu T2010, Toshiba R400, and Compaq 2710p tablets all feature LED backlighting (as does the upcoming Dell Latitude XT tablet). Why doesn’t the X61t? While the X61t is an excellent machine, including the unusual IPS-type display, the loss of brightness and battery life of CFL versus LED makes it difficult to recommend to most users.

Pierre

sudo gedit /boot/grub/menu.lst

should be

sudo gedit /etc/default/grub

Anarchist

The first one is for legacy GRUB. The latter is for GRUB2.

Thanks and keep the good will…!!!

racecar56

About “echo deadline > /sys/block/sda/queue/scheduler” not working on sudo:

Just use:

sudo sh -c “echo deadline > /sys/block/sda/queue/scheduler”

someone

use UUID in /etc/rc.local:

echo deadline > /sys/block/$(blkid -U 01111111-aaa1-1111-1111-aaa111111111|cut -c 6-8)/queue/scheduler
echo 1 > /sys/block/$(blkid -U 01111111-aaa1-1111-1111-aaa111111111|cut -c 6-8)/queue/iosched/fifo_batch

Charles Twardy

I like the article, but I am inclined to agree with Mathieu above. The system pauses during writes and the slow throughput suggest that Tom had a flawed first-gen SSD. I’m curious whether the I/O scheduler trick is necessary with an Intel SSD, or one of the second-gen cards.

joaopft

All schedulers assume seek time is very large compared to sequential read time. That is not the case with SSDs. Hence, the scheduler assumptions will cause problems that get worse when more complex SSD controllers are present. In fact, a good SSD controller will have a scheduler of its own, and this may cause conflicts.

CFQ is very complex. It gives highest priority to real-time reads and puts everything else on a queue using some complicated stochastic model. It has a high probability of conflicting with the SSD controller.

Deadline is very simple. It also prioritizes real-time processes, but all other requests simply have a deadline assigned and are executed in elevator order (the address closest to the last read is served next). When the deadline of some request is reached, the scheduler jumps to that request. Users are finding that deadline works reasonably well with the SSD controller.

Anticipatory is very closely tied to hard-drive structure. It waits some time for nearby reads (which is an irrelevant strategy on SSDs), and then proceeds in elevator order, like deadline (except that there are no deadlines!). It should never be used with SSDs.

Noop is the simplest, and just assumes that scheduling is done by the SSD (or hardware RAID) controller. It serves requests in the order they are received. Theoretically it is the best choice for SSDs, but its performance depends heavily on the “smartness” of the scheduler in the SSD controller. We can only guess at that…

toga

If I add the elevator=deadline option to the default parameters in /boot/grub/menu.lst, what should I do with the echo 1 > /sys/block/sd?/queue/iosched/fifo_batch commands? Should I add them to /etc/rc.local, or aren’t they needed in this case?

SamD

1. Add the ‘discard’ parameter to the fstab line for your SSD when you add the ‘noatime’ param.
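For example, a hypothetical fstab line with both options added (the UUID is a placeholder):
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx / ext4 noatime,discard,errors=remount-ro 0 1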

2. Using ext3 or ext4, you can disable the journaling with:
tune2fs -O ^has_journal /dev/sdx
e2fsck -f /dev/sdx

Note: Boot from a live CD to do this, since the disk must be dismounted at the time. Reboot when done. Check with ‘dmesg | grep -i ext3’ or ext4.

3. Ubuntu 10.10 doesn’t have the /sys/block/sdx/queue/iosched/fifo_batch parameter.

4. For more detail, see: http://cptl.org/wp/index.php/2010/03/30/tuning-solid-state-drives-in-linux/

Sigurd Mellqvist

Perhaps a dumb question..

but when I add the tmpfs option in fstab I end up with two tmpfs entries when I run df -h:

tmpfs 1.9G 76M 1.9G 4% /tmp
tmpfs 768M 876K 767M 1% /run

Do I need both of these, or can I remove one? And how?

Varun

Hi,

I would like to implement my own disk scheduler in Ubuntu Linux. What is the procedure to do that, and which files do I have to modify?
