== Clustering and high availability ==

=== Purpose ===

Patching your systems on time is very important. This document provides suggestions on how to improve the high availability of the SUSE Manager appliance.



=== Status: important! ===

'''IMPORTANT:''' This page contains only friendly hints that can be used when considering a cluster setup of SUSE Manager; this information is '''not an officially supported manual'''.

'''WARNING: As of SUSE Manager v1.7 and earlier, neither an Oracle nor a PostgreSQL cluster setup is supported. If customers still want to do that, it is exclusively ON THEIR OWN RISK AND RESPONSIBILITY.'''

Use these suggestions at your own risk and responsibility.



=== Limitations ===

'''WARNING: As of version 1.7 and older, SUSE Manager is not a cluster-aware ''application''. That means it cannot itself be installed on several nodes and then joined together as one piece of software. This limitation applies to all product versions and combinations. If customers still want to do that, it is exclusively ON THEIR OWN RISK AND RESPONSIBILITY.'''




== Stand-by setup ==


This is neither a cluster nor a grid setup, but just a ''high availability'' setup, where the SUSE Manager application will remain available in a ''partial'' disaster. In this setup two SUSE Manager appliances are installed as active and passive. The active one is what is mainly used; the passive one is kept synchronized as closely as possible to the active. A disaster assumes that only one appliance can be down at a time, while the other one becomes the active appliance. After the restoration work has been done, the restored appliance is supposed to be synchronized back to the current status.

To do so, please refer to the RedHat Satellite documentation: http://www.redhat.com/f/pdf/rhn/Satellite-HA.pdf
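
As a rough illustration of keeping the passive appliance close to the active one, the package store (and a database dump) can be copied over periodically. The path, hostname and options below are assumptions for this sketch, not values taken from the Satellite HA guide; adapt them to your installation.

 # On the active appliance: push the package store to the passive node.
 # Assumes the packages live under /var/spacewalk and the passive node
 # is reachable over SSH as "suma-passive" (both are placeholder values).
 rsync -a --delete /var/spacewalk/ suma-passive:/var/spacewalk/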




== Redundant storage ==

As SUSE Manager is based on Spacewalk, it inherited its inability to run on a real cluster and is therefore not a cluster-aware application. For High Availability, right now '''the only way''' is to put the database ''tablespace'' on redundant shared storage and make sure the data stays there rock-solid.

* '''SUSE suggests:''' ''It is better to install SUSE Manager on a virtual machine and snapshot the installed working image every time it gets updated. Then, when the SUSE Manager node fails (hardware failure), the same image can be fired up in minutes on different hardware, using HA. That said, instead of having the SLES HA extension take care of a particular component, say Tomcat or OSAD within SUSE Manager, it is better to take care that the entire virtual machine is running. So if a virtual machine "A1" fails on box "A", the SLES HA Extension will start a virtual machine "B1" on physical box "B" from the same snapshot of the same identical virtual machine.'' A snapshot example is sketched below.
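
A minimal way to take such a snapshot for a KVM/QEMU guest is sketched below. The image path and snapshot name are placeholders for illustration, not values shipped with SUSE Manager.

 # Shut the guest down (or at least pause it) before snapshotting its disk,
 # then record a named snapshot inside the qcow2 image (placeholder values).
 qemu-img snapshot -c after-update-2012-10 /var/lib/libvirt/images/susemanager.qcow2

 # List the snapshots stored in the image
 qemu-img snapshot -l /var/lib/libvirt/images/susemanager.qcow2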


=== Possible scenario ===


Below is a summary example of how to implement the suggestion above (a resource configuration sketch for the last step follows the list):

# Install two SLES instances on physical hardware.
# Install the SLES HA extension on both of them (sold separately).
# Install SUSE Manager on a virtual machine, say KVM.
# Save the KVM virtual machine image on reliable storage.
# Let the HA extension start the virtual machine on the inactive (passive) node once disaster occurs.
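
The last step could look roughly like the following with the crm shell from the SLES HA extension. This is only a sketch under assumptions: the resource name, the libvirt domain XML path and the timeouts are made up for illustration, and this is not an officially supported configuration.

 # Define the SUSE Manager guest as a cluster resource, so Pacemaker starts
 # it on the surviving node when the active one fails ("vm_susemanager" and
 # the XML path are placeholder values).
 crm configure primitive vm_susemanager ocf:heartbeat:VirtualDomain \
     params config="/etc/libvirt/qemu/susemanager.xml" hypervisor="qemu:///system" \
     op start timeout="120s" op stop timeout="120s" \
     op monitor interval="30s" timeout="60s"

For this to work, the guest's disk image has to live on storage that both nodes can reach, which is exactly the redundant storage discussed above.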

=== SLES HA extension ===


Since SUSE Manager is not a cluster-aware ''application'', the only way that helps here is to automate the active/passive scenario. The HA extension for SLES will automatically start the rest of the services on the passive machine. However, the database needs to be shared. Basically this works in the following way:

client


=== Scaling ===


SUSE Manager can handle about 30,000 servers with a fairly reasonable performance. However, traffic for package repositories could be an issue.




* '''SUSE suggests:''' ''It is a very good idea to set up a SUSE Manager Proxy per each 5,000 servers to offload server traffic from the SUSE Manager in case of package transfers. This will not add processing performance, but it will decrease traffic.''




== Database Sizing ==




While the database may actually use less, SUSE Manager requires '''25GB''' of free disk space to be available for most installations. And that still depends on the ''initial load'', which can easily grow up to a '''terabyte''' if there are many transactions. However, as a rule of thumb, 25GB should be fine.
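
A quick way to verify that enough free space is available is an ordinary df check on the filesystem that holds the database files. The path below is an assumption for the sketch; use whatever location your database actually lives in.

 # Show free space on the filesystem holding the database files
 # ("/var/lib/pgsql" is a placeholder; substitute your database location)
 df -h /var/lib/pgsql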




=== Tablespace management with Oracle database ===

Tablespace management is a critical part of the database. This section describes tablespace-related enhancements.

In the Oracle database, space is usually allocated after scanning a bitmap in the header of the data file. There are two types of locally managed tablespaces, which differ in their implementation:

* Uniform extent allocation. This method uses extents of the same size for all objects within the tablespace.
* Automatic extent allocation. This method uses a set of extent sizes that are factors of each other, beginning with 64K and moving upward through 1MB, 8MB, and 64MB.


SUSE Manager as of version 1.7 uses the second method, to make sure system administrators spend more time with their families rather than looking after Oracle software. :-) But because of this method, the SUSE Manager database can easily grow very quickly, especially on the initial run. The database size is not always fixed: space is reclaimed back, archive logs are purged after each backup, and so on. However, there should be enough space to handle a big load of servers, since each transaction also grows the archive log until a backup is taken.

Keep in mind that the database storage needs may grow rapidly, depending upon the variance of the following factors:

* The number of public Vendor packages imported (typical: 5000)
* The number of private packages to be managed (typical: 500)
* The number of systems to be managed (typical: 1000)
* The number of packages installed on the average system (typical: 500)

Although you should be generous in your database sizing estimates, you must consider that size affects the time to conduct backups and adds load to other system resources. If the database is shared, its hardware and spacing are entirely dependent on what else is using it.

'''SUSE suggests:''' ''Put database tablespace on the disk space that can be resized (LVM or BTRFS, ZFS — depends on the storage vendor).''


=== Tablespace management with PostgreSQL database ===


Despite its huge elephant logo, the PostgreSQL database is still a few orders of magnitude less eager for disk space than its "red colleague" from California, and thus hardly exceeds 10GB of disk space even with a pretty big number of managed servers.
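
To see how much space the database actually occupies, the data directory can simply be measured on disk. The path below is the usual PostgreSQL data directory on SLES, but treat it as an assumption and adjust it to your setup.

 # Report the on-disk size of the PostgreSQL data directory
 du -sh /var/lib/pgsql/data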


'''SUSE suggests:''' ''Put database tablespace on the disk space that can be resized (LVM or BTRFS, ZFS — depends on the storage vendor).''


== Database management ==


SUSE Manager as of version 1.7 has a new feature called SMDBA. '''SMDBA''' is the SUSE Manager database control tool and replaces [http://ia.media-imdb.com/images/M/MV5BMzU3MDE3MTc2NF5BMl5BanBnXkFtZTcwMjU1MTcwNA@@._V1._SX640_SY414_.jpg RedHat's "Dobby"]. This tool was developed to provide the same interface for Oracle and PostgreSQL databases. The set of commands may differ, since the database engines are really different and not everything that exists in Oracle exists in PostgreSQL (and vice versa). SMDBA is used to take hot backups, restore from a complete disaster, check available space, start, stop or restart the database, and so on.


'''IMPORTANT: Do NOT use SMDBA as a database cloning tool!''' If you want to set up several identical SUSE Manager instances, set it up on a virtual machine and simply reuse the image.


To install SMDBA, simply issue the Zypper command the following way:


 sudo zypper install smdba


After the package is installed, please refer to the manual page:


 man smdba


Please refer to the online documentation of SUSE Manager for more information regarding SMDBA.
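
As a quick illustration of day-to-day use, SMDBA is driven through subcommands. The subcommand names below are given from memory and may differ between versions, so treat them as assumptions and rely on the man page for the authoritative list.

 # Check the database status and the available space
 # (subcommand names are assumptions; consult "man smdba" for your version)
 smdba db-status
 smdba space-overview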
