Comments on: Quickly preloading Innodb tables in the buffer pool

Last Build Date: Tue, 12 Dec 2017 23:44:35 +0000


By: Satej

Fri, 06 May 2016 03:51:23 +0000

I would just like to load the indexes into memory. Would something like the following do? SELECT count(index_col) FROM tbl WHERE index_col LIKE '%0%'. I think this would do a full index scan only.
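Satej's one-query-per-index trick can be generalized. Below is a minimal sketch (Python, with hypothetical table and column names not taken from the comment) that builds one such warm-up statement per indexed column; the leading wildcard in LIKE '%0%' prevents a range lookup, so MySQL has to walk the whole index:

```python
# Build warm-up queries in the style of Satej's suggestion: one full index
# scan per indexed column. The wildcard-prefix LIKE pattern cannot use a
# range lookup, so the whole index is read into the buffer pool.
# Table and column names below are placeholders, not from the original post.
def warmup_queries(table, index_cols):
    return [
        f"SELECT count({col}) FROM {table} WHERE {col} LIKE '%0%'"
        for col in index_cols
    ]

for q in warmup_queries("orders", ["customer_id", "created_at"]):
    print(q)
```

Whether the server satisfies each of these from the index alone still depends on the query plan, so it is worth confirming with EXPLAIN before relying on it.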

By: Peter Hulstaert

Tue, 20 Jan 2015 09:54:11 +0000

At work we use databases that are too big to fit in memory, but we do want to preload after a restart of the service. We usually do a SELECT count(*) FROM TableA WHERE datecol >= curdate() - INTERVAL 366 DAY AND (col1 LIKE '%0%' OR col2 LIKE '%0%' OR col3 ...). This way we only load the current data into memory.

By: Billy

Mon, 25 Aug 2014 00:51:04 +0000

This trick had no effect at all on query speeds when I tested it on MySQL 5.5.37. I used the SELECT count(*) FROM tbl WHERE index_col LIKE '%0%' query for every index column.


By: peter

Wed, 28 Jan 2009 19:26:46 +0000

Mark, this is a nice patch, but it is still a patch. It is also a proof of concept, so I'm not sure it is good enough to be used widely in production. Note that as soon as you start messing with data this way, you have to deal with a lot of edge cases to make things work 100%. For example, what happens if you crash while writing the dump file? How do you deal with potential corruption of such a file (store checksums and validate them), etc.? But in general it is a nice idea for many workloads.
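The checksum-and-validate idea mentioned above can be sketched quickly. This is not how any real dump patch works; it is an illustration with an invented file format, showing a digest stored alongside the payload and a refusal to preload from a file that fails the check:

```python
import hashlib

# Sketch of the corruption check Peter describes: store a checksum next to
# the dumped data and refuse to preload from a file that fails verification.
# The file format (hex digest, newline, payload) is invented for illustration.

def write_dump(path, payload: bytes) -> None:
    digest = hashlib.sha256(payload).hexdigest().encode()
    with open(path, "wb") as f:
        f.write(digest + b"\n" + payload)  # 64 hex chars, newline, payload

def read_dump(path) -> bytes:
    with open(path, "rb") as f:
        stored, payload = f.read().split(b"\n", 1)
    if hashlib.sha256(payload).hexdigest().encode() != stored:
        # A crash mid-write or later bit rot shows up here; a cold start
        # is the safe fallback rather than preloading garbage.
        raise ValueError("dump file corrupt; fall back to a cold start")
    return payload
```

A writer crash that truncates the file leaves a payload that no longer matches the stored digest, so the loader falls back to a normal cold start instead of trusting partial data.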

By: Mark Rose

Wed, 28 Jan 2009 18:11:58 +0000

There's an interesting patch to partially solve this problem:

By: jim

Wed, 07 May 2008 22:46:36 +0000

Couldn't you use Kevin's blackhole idea, but by creating a temporary blackhole table with CREATE ... SELECT? CREATE TEMPORARY TABLE blackhole_foo ENGINE = BLACKHOLE SELECT * FROM foo ORDER BY id. It does seem a shame that CREATE TABLE ... LIKE doesn't allow specifying the engine.
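jim's CREATE ... SELECT trick could be scripted per table. A small sketch (the table name and ORDER BY column are placeholders, not part of the original comment) that emits the statement:

```python
def blackhole_warmup(table, order_col="id"):
    # Copying every row into a temporary BLACKHOLE table discards the data,
    # but forces InnoDB to read each page of the source table (in the given
    # order) into the buffer pool, following jim's suggestion.
    return (
        f"CREATE TEMPORARY TABLE blackhole_{table} ENGINE=BLACKHOLE "
        f"SELECT * FROM {table} ORDER BY {order_col}"
    )

print(blackhole_warmup("foo"))
```

As Peter notes in the next comment, this routes every row through the insert code path on the server side, so it may be slower than a plain count() scan.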

By: Kevin Burton

Mon, 05 May 2008 03:31:59 +0000

Venu, this only works for tables that fit 100% in memory. Some people rely on the LRU semantics to cache their data. It's harder to get this working, as you have to 'warm up' a box with new load. One way to do this could be to slowly expose it to more and more traffic. Kevin

By: venu

Mon, 05 May 2008 02:19:31 +0000

It would be nice if InnoDB/MySQL implemented something like ... innodb_pre_load_tables=List_of_tables as a configuration variable, allowing frequently used tables to be loaded (provided they fit), along with something like ... PRELOAD [FULL] TABLE foo

By: peter

Sun, 04 May 2008 04:52:10 +0000

Kevin, too many statements :) I'm also wondering whether it is faster or slower than the count() query, as it has to go through the insert code path, at least on the server side. There are two other potential issues: for non-InnoDB tables (or non-primary-key ordering), the last query may pick a filesort instead of a table scan. Also, if you repeat this for a bunch of indexes, you will incur considerable overhead by accessing the row each time instead of traversing the index only.