Discussion:
Max Rows Per Page, Reserve Page Gap, and Fill Factor
1dg618
2011-11-05 22:34:08 UTC
Permalink
I have been trying to find information on how to try to figure this
out without any luck.

What is the best way to figure out how to set the following based on
the locking schemes? Does everyone just use the defaults, which are
zeros?

I was also reading that setting max rows per page can reduce lock contention. Is that right?

Locking scheme:

  All pages:
    Max rows per page
    Reserve page gap
    Fill factor (0 - 100)

  Data pages:
    Expected row size
    Reserve page gap
    Fill factor (0 - 100)
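
For reference, the syntax I have seen for viewing and changing these looks roughly like the following (the table name is just a placeholder, and the value is only an example):

    -- show a table's current attributes, including exp_row_size,
    -- reservepagegap, fillfactor and max_rows_per_page
    sp_help my_table
    go

    -- change one attribute on an existing table, e.g. the reserve page gap;
    -- this only affects future space allocations
    sp_chgattribute "my_table", "reservepagegap", 16
    go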
Derek Asirvadem
2011-11-07 03:36:45 UTC
Permalink
Post by 1dg618
I have been trying to find information on how to try to figure this
out without any luck.
The Sybase manuals are not bad at all for that subject.

Probably best for you to read the Sybase Fragmentation doc I linked in my
last answer, then ask specific questions.
Post by 1dg618
What is the best way to figure out how to set the following based on
the locking schemes? Does everyone just use the defaults, which are
zeros?
In the old days, when we only had APL, yes, MAX_ROWS_PER_PAGE was a
great way of reducing contention for locks. But there were many other
things that one would implement first, before addressing that. While
DPL/DRL reduces (it does not eliminate) the locking issue, it
introduces a horrendous new level of fragmentation, and you lose Range
Queries, etc. Invariably DPL/DRL tables cause more locks overall. So the
high-performance or 24x7 shops still rely on APL.
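
To illustrate the APL case only (the table and the value are made up, not a recommendation):

    -- An APL table where rows are deliberately spread across more pages
    -- to reduce lock contention; 4 rows per page is an arbitrary example
    create table hot_account (
        account_no  int    not null,
        balance     money  not null
    )
    lock allpages
    with max_rows_per_page = 4
    go

    -- or change it on an existing table (affects future allocations only)
    sp_chgattribute "hot_account", "max_rows_per_page", 4
    go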

Consistent with my last answer, you *either* have a Relational
database with Relational keys, which is best suited to APL, good data
distribution, etc, and thus you leave space for interspersed INSERTs via
FILLFACTOR and RESERVEPAGEGAP. As to the values, it depends on how
much space you can afford to reserve, and how often you plan to de-
fragment the Clustered Index (CI). I do that once every two years, but
that means I plan the values on a per-table basis.
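
As an illustration only (the index, table and values are invented; you would derive the values from your own INSERT patterns and de-fragmentation cycle):

    -- Clustered index on an APL table, leaving room for interspersed INSERTs:
    -- fillfactor controls how full each page is at creation time;
    -- reservepagegap leaves an empty page at intervals (here roughly 1 in 16)
    create clustered index customer_ci
        on customer (customer_id)
        with fillfactor = 80, reservepagegap = 16
    go

Re-creating the clustered index re-applies both settings, which is what the periodic de-fragmentation of the CI amounts to.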

*Or else* you have a record filing system implemented in DPL/DRL
tables. These have a Heap, and new INSERTs are added to the end of
the Heap. You can use partitions, in which case the INSERTs are
round-robined across the partitions. Either way, new rows land nowhere
near the rows with similar PKs, or whatever you specified in the
Placement Index. Sure, the fragmentation can be slightly mitigated via
FILLFACTOR and RESERVEPAGEGAP, but there is little point, since these
tables demand weekly de-fragmentation anyway. Since they do not have
Page Chains, the relevance of those parameters is reduced; FreeSpace
tracking is quite different.
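
A rough sketch of what that looks like (table name, sizes and values are hypothetical):

    -- A data-only-locked table: new rows go to the end of the Heap;
    -- exp_row_size reserves space for rows that may grow on UPDATE
    -- (only meaningful when rows are variable length)
    create table event_log (
        event_no    int           not null,
        logged_at   datetime      not null,
        detail      varchar(200)  null
    )
    lock datarows
    with exp_row_size = 120, reservepagegap = 16
    go

    -- the routine (e.g. weekly) de-fragmentation for DOL tables;
    -- not available for APL tables, which need the clustered index re-created
    reorg rebuild event_log
    go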

One thing that really makes a difference to performance in several
areas, and here eliminates certain types of fragmentation, is to ensure
your rows are fixed length. That means no nullable columns.
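
A minimal before-and-after sketch (the table and column names are invented):

    -- variable-length rows: nullable and varchar columns can grow on UPDATE,
    -- which causes row forwarding (DOL) or page splits (APL)
    create table person_var (
        person_no  int          not null,
        surname    varchar(40)  null
    )
    go

    -- fixed-length rows: every row occupies the same space for its lifetime
    create table person_fixed (
        person_no  int       not null,
        surname    char(40)  not null
    )
    go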

No, I never let them default. But then I manage space much more
carefully than most sites, with a view to eliminating (not merely
reducing) fragmentation and space management problems. E.g. I use
Segments to separate contentious tables and to distribute data. In
this game, the maxim is: if you look after the pounds, the pennies
will look after themselves; the law of diminishing returns applies.
So it is a waste of time expending large effort, and regularly, on
fixing the lower-level space problems. If you handle the higher-level
space management, once, the lower-level issues are irrelevant or
eliminated. No amount of fiddling at the lower levels is going to
give back the precious Asynchronous Prefetch and Large I/O; if you work
at the higher levels, you never lose them in the first place.
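
For the Segments point, a rough sketch only (segment, device, and table names are placeholders):

    -- create a named segment on its own device, then place a busy table on it,
    -- separating its space and I/O from the rest of the database
    use my_db
    go
    exec sp_addsegment hot_seg, my_db, data_dev_02
    go

    create table order_line (
        order_no  int  not null,
        line_no   int  not null
    )
    lock allpages
    on hot_seg
    go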

Regards
Derek
1dg618
2011-11-08 01:45:43 UTC
Permalink
Thank you. I have a long way to go.
Derek Asirvadem
2011-11-11 22:52:10 UTC
Permalink
Post by 1dg618
Thank you. I have a long way to go.
My pleasure.

"A journey of a thousand miles begins with a single step" Lao Tzu. You
are well on your way.

Regards
Derek
