Discussion:
11.70.FC6 Linux "load from" seems awfully slow.
s***@t-online.de
2012-12-20 18:16:54 UTC
Hello All,

I am trying to optimize a "load from" (actually a dbimport).
Initially everything is unlogged, and after each test the dbspaces are dropped and recreated.
They are on an ext2 filesystem, with direct I/O switched on.

The machine is Debian Linux under VMware, 4 CPUs, 16 GB memory.

Configured are 3,000,000 buffers with LRU cleaning effectively switched off (thresholds at 99% and 98%; I am doing my own checkpointing). When I run a "load from ... insert into" on a table whose row size is 1924 bytes (columns are char, int, decimal and date types, nothing really special),
the database dirties pages at a rate of 5000 pages per second, i.e. 10 MB per second. top says one oninit is eating approximately 95% CPU and dbimport (or dbaccess)
is eating 70% CPU. (I have tried it with a German locale and with the default; no difference.)
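For reference, the buffer pool line in my onconfig looks roughly like this (the lrus count here is illustrative, not my exact value):

BUFFERPOOL size=2K,buffers=3000000,lrus=8,lru_min_dirty=98,lru_max_dirty=99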

This does not seem right.

When I instead copy a table from dbspace A to B, it dirties 100,000 pages or more per second.

When it is time to write the data to disk, the I/O subsystem is doing
64 MB per second at 2000 requests per second (iostat says so) and the I/O subsystem is 100% busy. So writing the data to disk seems to be 6 times faster than
loading it into the buffer cache.


Last but not least, writing seems to be done using 32 KB requests, i.e. the old 16 pages of MAXIO size. When I add a dbspace using onspaces, it issues much bigger requests and is capable of writing 250 MB per second.

I guess someone should take a real close look at this, since disks are getting faster
and this limit is killing performance. So, if I may ask for a feature request:
get rid of the MAXIO limit of 16 pages, or make it bigger, so we can get better throughput.


Thanks

Superboer.
Art Kagel
2012-12-20 19:11:39 UTC
Make sure that each table's initial extent is big enough to hold the entire
dataset being loaded. If you didn't include the -ss flag to dbexport, the
table starts out with the default 16 KB first extent.
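Roughly (the database/table names and the extent size below are placeholders, not from your schema):

dbexport -ss mydb    # -ss keeps server-specific info (extent sizes, lock mode, dbspaces) in the generated schema

-- or size the first extent yourself in the generated .sql before importing:
create table big_tab ( ... ) extent size 1500000 next size 100000 lock mode page;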

Art
Fernando Nunes
2012-12-20 19:57:35 UTC
Have you set up FET_BUF_SIZE?
Do you see KAIO threads?

Don't forget that when you're "loading" you're reading ASCII and converting
it. It will always be slower.
A way to speed it up is to use external tables. This will be the fastest
way and should solve your load-time problem.
FET_BUF_SIZE should help, but it should not make a dramatic difference.
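Something along these lines (table and file names are examples only; express mode also has restrictions, e.g. no indexes on the target):

export FET_BUF_SIZE=32767    # bigger fetch/insert buffer for the client session

dbaccess mydb <<'EOF'
create external table ext_mytab sameas mytab
    using ( datafiles("disk:/data2/unload/mytab.unl"), express );
insert into mytab select * from ext_mytab;
EOF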

It would also be interesting to check your environment's I/O capabilities. It
would not be the first time that we have seen VMware saturated at a low
I/O rate.
It will also help to create dbspaces with a bigger page size (at least 4 KB).
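For example, something like this creates a dbspace with 16 KB pages (path and size are only illustrative):

onspaces -c -d dbs16k -k 16 -p /data2/infdev/dbs16k.000 -o 0 -s 15000000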

Regards.
--
Fernando Nunes
Portugal

http://informix-technology.blogspot.com
My email works... but I don't check it frequently...
s***@t-online.de
2012-12-30 12:18:04 UTC
Hello Fernando,

All results, including HPL:

Loading a 1.4 GB table:

2 KB pages:  copy to memory using load 140 secs, writing to disk at 64 MB/sec = 22 secs, total 162 secs
16 KB pages: copy to memory using load 140 secs, writing to disk at 150 MB/sec = 9 secs, total 149 secs

Both had dbaccess eating 80% CPU and one oninit eating 80% CPU.

HPL untuned did 36 MB/sec while writing/loading, total 39 secs, with only one onpload eating 100% CPU.


HPL overtuned with:

CONVERTTHREADS 8 # Number of conversion threads per device
CONVERTVPS 8 # Max number of vps for converters (total)

# Buffer Configuration

STRMBUFFSIZE 16384 # Buffer size for server stream buffer (kbytes)
STRMBUFFERS 8 # Number of server stream buffers per device
AIOBUFSIZE 16384 # Buffer size for tape/file I/O (kbytes)
AIOBUFFERS 8 # Number of buffers

on a 4 CPU box, this did:

Device:   rrqm/s   wrqm/s    r/s      w/s    rkB/s       wkB/s  avgrq-sz  avgqu-sz   await  svctm  %util
sda         0,00     0,00   0,00     0,00     0,00        0,00      0,00      0,00    0,00   0,00   0,00
sdb         0,00     0,00   0,00     0,00     0,00        0,00      0,00      0,00    0,00   0,00   0,00
sdc         0,00  5404,00   0,00   731,60     0,00  151232,00    413,43      5,55    7,58   0,58  42,72

So writing at 150 MB/sec takes 9 secs; in this case all 4 CPUs were 100% busy.
As far as I can tell, the I/O size here is a lot bigger than 32 KB.


See you

Superboer.

Art Kagel
2012-12-20 20:09:40 UTC
Ahh, I hadn't noticed that. I/O rates under VMware are atrocious! If you
are not using the new hypervisor (vSphere, or whatever it's called) I/O rates top out at
about 60 MB/s. In our testing for a client we could only get a bit over
100 MB/s with the older hypervisor, or none at all.

Art

Art S. Kagel
Advanced DataTools (www.advancedatatools.com)
Blog: http://informix-myview.blogspot.com/

Disclaimer: Please keep in mind that my own opinions are my own opinions
and do not reflect on my employer, Advanced DataTools, the IIUG, nor any
other organization with which I am associated either explicitly,
implicitly, or by inference. Neither do those opinions reflect those of
other individuals affiliated with any entity with which I am affiliated nor
those of the entities themselves.
s***@t-online.de
2012-12-21 18:27:01 UTC
Hello Art, Fernando,


First of all thanks for the responses!!

OK, some numbers; sorry, it got mixed up in cutting and pasting. Anyway:

Load with a shared-memory connection and FET_BUF_SIZE=32000 or unset:
no difference. And yes, I have KAIO. iostat tells me that the writes done
are 32 KB and that the disks are almost always 80 to 100% busy.


time onspaces -c -d tessie  -p /data2/infdev/tessie.000 -o 0 -s 15000000
Verifying physical disk space, please wait ...
Space successfully added.
** WARNING **  A level 0 archive of Root DBSpace will need to be done.

real    1m4.566s
user    0m0.004s
sys     0m0.000s


iostat during onspaces:
21.12.2012 12:11:38
Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util
sda               0,00     0,00    0,00    0,00     0,00     0,00     0,00     0,00    0,00   0,00   0,00
sdb               0,00     0,00    0,00    0,00     0,00     0,00     0,00     0,00    0,00   0,00   0,00
sdc               0,00     0,00    0,00  531,00     0,00 240160,00   904,56   143,51  271,31   1,88 100,00

dbspace with 16K
21.12.2012 11:51:42
Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util
sda               0,00     0,00    0,00    0,00     0,00     0,00     0,00     0,00    0,00   0,00   0,00
sdb               0,00     0,00    0,00    0,00     0,00     0,00     0,00     0,00    0,00   0,00   0,00
sdc               0,00     0,00    0,00 3993,60     0,00 121158,40    60,68     6,48    1,62   0,24  95,68


dbspace with 2K
21.12.2012 12:05:38

Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util
sda               0,00     0,60    0,20    2,20     0,80    11,20    10,00     0,01    3,67   0,67   0,16
sdb               0,00     0,00    0,00    0,00     0,00     0,00     0,00     0,00    0,00   0,00   0,00
sdc               0,00     0,00    0,00 1994,80     0,00 61408,00    61,57     0,95    0,48   0,44  87,20


2k:

Confirmed with onstat -Rr |grep dirty

1436536 dirty, 3600000 queued, 3600000 total, 4194304 hash buckets, 2048 buffer size
1284888 dirty, 3600000 queued, 3600000 total, 4194304 hash buckets,
1130840 dirty, 3600000 queued, 3600000 total, 4194304 hash buckets,
978344 dirty, 3600000 queued, 3600000 total, 4194304 hash buckets,
832952 dirty, 3600000 queued, 3600000 total, 4194304 hash buckets,
679240 dirty, 3600000 queued, 3600000 total, 4194304 hash buckets,


(1,130,840 - 978,344) / 5 ≈ 30,000 pages per second, i.e. 60 MB/sec at 2 KB per page


Bottom line: the 16 KB page size does twice as well as 2 KB, despite the fact that it also uses a 32 KB write buffer. Looking at onspaces, it seems to use about a 500 KB buffer and does 4 times better than 2 KB.

My opinion is that this performance is not good, and it is becoming like the old "2 GB chunks, yes/no" discussion: this is one of the old dogs in the engine that needs a face lift. You may choose not to do it, but then I guess it will only get worse.


Regarding the load:

-->Don't forget that when you're "loading" you're reading ASCII and converting it. It will always be slower.

Yeah, but I do not think a factor of 6 compared to writing to disk is acceptable,
and it is even worse when the write side uses the full capability of the I/O subsystem: then
it is a factor of 24!

See you

Superboer.
Cesar Inacio Martins
2012-12-21 19:30:26 UTC
Hi Superboer,

Did you test the raw I/O performance with dd?
dd if=/dev/zero bs=2k count=1000000 of=/your/chunk     # ~2 GB written in 2 KB blocks
dd if=/dev/zero bs=16k count=125000 of=/your/chunk     # ~2 GB written in 16 KB blocks

Looking at the numbers, the situation below doesn't appear to be what's happening... so...
anyway, here are my comments.

You said: LRU cleaning switched off (99% and 98%, doing my own
checkpointing).

Just make pretty sure you aren't going into LRU flushes (onstat -F); once
you get into them there is no way to exit.
If you try to force a checkpoint with onmode -c after an LRU flush has
started, the command freezes and only starts after the LRU flush finishes,
and with data still incoming from the load that will take a long time.

I already got into this situation and needed to write a script that checks
the dirty buffers every second and forces a checkpoint when they reach 50%
of the buffers, because in the environment I was working on, the loads were
sometimes so fast that they filled 4 GB of buffers in a couple of seconds.
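A rough sketch of that kind of watchdog (the threshold, SQL and output parsing here are approximations, not the exact script I used):

#!/bin/sh
# force a checkpoint whenever the buffer cache gets about 50% dirty
while true; do
    dirty=$(echo "select round(sum(lru_nmod)*100/sum(lru_nfree+lru_nmod),0) from syslrus;" \
            | dbaccess sysmaster 2>/dev/null | awk '/^ *[0-9]/ { print int($1); exit }')
    if [ "${dirty:-0}" -ge 50 ]; then
        onmode -c          # blocking checkpoint request
    else
        sleep 1
    fi
done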
Jason Harris
2012-12-22 02:17:47 UTC
Hi Superboer,

My 2c worth.

Loading through the buffer cache is very inefficient. With one record per page, the engine must do this for each row:
1. read the page into the buffer cache
2. write the page before image to the physical log
3. write the page back to disk

Other tools (HPL, etc.) are optimized for this type of load and bypass the buffer cache altogether. It's during loads like this that the pages to be written are contiguous on disk; in normal operation they can be all over the place.

Some things that may help (items 1 and 2 are sketched below):
1. increase the physical log buffer size
2. if you have a volume manager e.g. LVM, turn off its read-ahead
3. change the file system block size
4. tune your storage for a smaller I/O size
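For instance (the values and device names here are only illustrative):

# onconfig: physical-log buffer size in KB (takes effect after a restart)
PHYSBUFF 1024

# turn read-ahead off on the underlying device / LVM volume
blockdev --setra 0 /dev/sdc
lvchange --readahead none /dev/vgdata/lvchunks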

Cheers,

Jason
s***@t-online.de
2012-12-22 08:51:46 UTC
Hello Cesar, Jason,


Thanks for your comments; see below for how I generate checkpoints.

For one test I have a table that fits into the cache, and then I do an onmode -c.
I used onspaces (actually I was looking at how long it took and what the I/O stats
were) to see what it could do (250 MB/sec), so unfortunately the above does not apply.
Neither does physical logging apply, since it is a newly created dbspace each time (drop, rm, touch, chmod, recreate, create table with a big initial extent, onmode -c, then start the load).

Besides that, the rootdbs containing the physical log (6 GB) is on sdb.

When time allows I will try HPL (onpload and the new one...).
Remember, I have a dbexport that I wanted to optimize; the effort to change the import itself is not worth the time it would take.

In the past I could get close to HPL loading times using the SPL below, which I needed anyway when the row was bigger than a page.


Superboer.

Remark: I only used it when I had a single buffer cache of 2 KB pages.

dbaccess sysmaster <<!
create procedure generatechkpt()

    define dirty decimal(4,3);

    while (1=1)

        -- fraction of buffers currently dirty, taken from the LRU queues
        select ( sum(lru_nmod) / sum(lru_nfree + lru_nmod) )
          into dirty
          from syslrus;

        if (dirty < 0.75) then
            system "sleep 1";
        else
            -- force a checkpoint once the cache is 75% dirty
            system "onmode -c";
        end if

    end while;
end procedure;

execute procedure generatechkpt();
!
Jason Harris
2012-12-23 04:41:03 UTC
Hi Superboer,

My comments were directed more at your idea of increasing the maximum I/O size in the engine, which would only help in your specific case of loading, and there are already methods in place for that specific case.

You are correct that to get the best from HPL you need to unload with it.

You will also find that even brand new pages need their before images logged in the physical log. To increase that I/O size, increase the size of the physical log buffer.

I know that on the 64 bit version of Informix (not sure about 32bit), you can specify values less than one for the max LRU dirty, e.g. max_lru_dirty=0.75, which I think is similar to what your stored procedure is doing.
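In the onconfig that would look something like this (illustrative values):

BUFFERPOOL size=2K,buffers=3000000,lrus=8,lru_min_dirty=0.5,lru_max_dirty=0.75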

The best way to load dbexport data is to create the destination table as raw, and create an external table for the unload file, then copy it in.
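Roughly like this (table and file names are placeholders):

dbaccess mydb <<'EOF'
create external table ext_mytab sameas mytab
    using ( datafiles("disk:/data2/unload/mytab.unl") );

alter table mytab type (raw);            -- raw = non-logging, fastest bulk insert
insert into mytab select * from ext_mytab;
alter table mytab type (standard);       -- switch back once the load is done
EOF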

HTH,

Jason
s***@t-online.de
2012-12-23 11:16:39 UTC
Hello Jason,

Do not get me wrong, I really appreciate your help. However,
loading data into a new dbspace, where the pages are not initialized,
does not require a before-image to be written to the physical log. onstat -l says so;
a quick test in which I wrote 38,000 pages tells me so:

onstat -l:

Physical Logging
Buffer      bufused    bufsize    numpages   numwrits   pages/io
  P-2       0          32         11         2          5.50
      phybegin         physize    phypos     phyused    %used
      1:263            10000      8922       0          0.00

onstat -D:

address   chunk/dbs  offset   page Rd  page Wr  pathname
26e4a958  1       1  0        4        21       /infdev/chunk1
26f2bd20  2       2  100000   0        0        /infdev/chunk1
26f19c30  3       3  150000   0        0        /infdev/chunk1
27a726f8  4       4  0        0        40012    /infdev/chunk2
 4 active, 32766 maximum


This example was done on my own private engine.




BTW, the SPL I use (generatechkpt()) starts a checkpoint when the buffer cache is 75% dirty, not 0.75%.

The point I am trying to make here is that there is room for improvement; it is
not criticism or anything else. As far as I can see, writing data to disk with a maximum buffer of 32 KB was OK a decade ago; back then one could outperform dd to a filesystem. On AIX I managed to write 50 MB per second to 10 raw disks where the AIX folks wrote only 40 MB per second. Nowadays disks are faster (onspaces uses roughly 500 KB requests, as I noticed).

I have not used Art's dbcopy yet; I am sure it is great. Maybe IBM should use it to
improve dbimport/dbexport. That is also one point I tried to make: improve the standard utilities to get a better product.

Superboer.
Jason Harris
2012-12-25 02:08:56 UTC
Hi Superboer, thanks for that. Looks like you learn something new every day. Maybe IBM should do what you say. Cheers, Jason
Art Kagel
2012-12-22 23:43:55 UTC
Actually, Jason, #1 & #2 have to be done only for the first row added to an
existing page. Only #2 has to be done for the first row written to an
unused page (but the bitmap page for it has to be updated in memory). #3
only has to be performed once per page also unless your datasource is VERY
slow or your LRU_MAX_DIRTY is WAY too low.

I do agree that HP Loader and external table loads in express mode are
considerably faster than an import from disk using dbimport (even a DELUXE
load using HP Loader or external tables is about 2x as fast as dbimport).
However, your analysis of the causes is flawed.

Most of the time, if the data being imported was originally exported from
another server, I find that the fastest way to make the copy is to use my
dbcopy utility to move the data directly from server to server. The actual
runtime of the copy is about the same as or faster than a deluxe-mode
external load, but you save the time to perform the export to disk, copy the
data to the target machine, and read it all back in again. In addition, you
can break up the larger table copies using dbcopy's -s 'SELECT ...' feature
into multiple data streams and copy N times faster than a single-threaded
export and import. Also, many smaller tables can be copied in
parallel. Example: recently at a client, we were timing an import from
export files (already exported and moved to the target) that was still
copying after 20 hours and would not have completed for another 30-36
hours. We killed it and copied the entire database in under 4 hours using
dbcopy, running five small tables or parts of larger tables in parallel!

Art

Art S. Kagel
Advanced DataTools (www.advancedatatools.com)
Blog: http://informix-myview.blogspot.com/

Disclaimer: Please keep in mind that my own opinions are my own opinions
and do not reflect on my employer, Advanced DataTools, the IIUG, nor any
other organization with which I am associated either explicitly,
implicitly, or by inference. Neither do those opinions reflect those of
other individuals affiliated with any entity with which I am affiliated nor
those of the entities themselves.
Jason Harris
2012-12-23 03:34:39 UTC
Art, I said one row per page. His example has the table "table_where_rowsize_1924_bytes", so when he has a 2 KB page size it's one row per page. When he has a 16 KB page size there are more rows per page. Jason