Hill backup schedule

August 17, 2013

The Hill backup system was described in a prior posting. Its main purpose is to provide roll-back to a known system state in case of a complete loss in Ungar. For this reason, it uses dump to provide a block-accurate disk image, including hard links and sparse files, and it does not modify file access times during backup.

The backups are incremental, with the current pattern being a level 0 at the start of the epoch, four level 4's (one at the start of each month), and four weekly or semi-weekly level 8's.

  • 11 PM: Pickett
  • 3 AM: Beauregard (Sunday), Bragg (Monday), Meade (Tuesday), Davis (Wednesday), Mcclellan (Thursday), Meade (Friday), Davis (Saturday)
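
As a concrete illustration of the level pattern, the dump invocations would look roughly like the following. This is a minimal sketch; the device name and output paths are assumptions, not the actual commands used for Hill.

    # Level 0 at the start of the epoch, level 4 at the start of a month,
    # level 8 weekly or semi-weekly; -u records the dump date in /etc/dumpdates.
    dump -0u -f /backup/root.level0.dump /dev/sda1
    dump -4u -f /backup/root.level4.dump /dev/sda1
    dump -8u -f /backup/root.level8.dump /dev/sda1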

System backups based on snapshots incidentally serve other purposes, for instance, recovering user files that have been corrupted or deleted. Past snapshots are retained with a granularity that coarsens with receding time.

It is often unacknowledged that no single backup scheme fits all needs. Archives and source control, for instance, contrast with snapshots. Snapshots are meant to be mechanical and precise. An archive is permanent, deliberate, and indexed; source control keeps versions, including deleted files, and possibly renames and file moves, and has collaboration mechanisms.

Beauregard renovations

August 16, 2013

Beauregard has been updated to Ubuntu 12.04 LTS (amd64), from Debian. Beauregard now provides MySQL, NFS, and backup services to the blades.

For backup, the blades are sandboxed and we generally do not snapshot the system; user data should be either on the NFS-mounted partition, in the MySQL database, or on our Subversion server. The blade is a compute engine; treat it as interchangeable and quickly re-installable.

  • NFS Howto: On beauregard:
    mkdir /opt/cluster/nfs/${BLADE} ;\
    echo "/exp/blades/${BLADE} ${BLADEIP}(rw,sync,no_root_squash,no_subtree_check)"\
    >> /etc/exports ;\
    /usr/sbin/exportfs -r

    On ${BLADE}:
    echo "beauregard:/exp/blades/${BLADE} /opt nfs soft,intr 0 0" >> /etc/fstab ; mount -a

    See also: beauregard:/opt/cluster/nfs/Makefile; note symlink from /exp/blades to /opt/cluster/nfs.

  • MySQL Howto: On beauregard:
    create database ${DATABASE};
    grant all privileges on ${DATABASE}.* to '${USER}'@'${BLADEIP}' \
    identified by '${PASSWORD}' with grant option ;

    On ${BLADE} with IP ${BLADEIP}:
    mysql -u ${USER} -p -h beauregard.cs.miami.edu
    use ${DATABASE};

Private local numbering

June 22, 2013

The internal IP addresses of the computer science machines are drawn from the 172.16.0.0/12 private address block, which is subnetted by security level into /16 subnets. Currently there are four security levels (public, sandbox, lab, and servers), but this summer we will collapse public and sandbox into a single level.

prefix       subnet    IP           level
172.16.0.0   0.2.0.0   172.18.0.0   public
172.16.0.0   0.3.0.0   172.19.0.0   lab
172.16.0.0   0.4.0.0   172.20.0.0   servers
172.16.0.0   0.5.0.0   172.21.0.0   sandbox
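
The arithmetic behind the table is just an offset added to the second octet of the 172.16.0.0 prefix. A small shell sketch of that arithmetic (the helper function is hypothetical; only the offsets come from the table):

    # Each security level's /16 is 172.16.0.0 plus the level's offset in the second octet.
    level_subnet() {
        local offset=$1          # 2 = public, 3 = lab, 4 = servers, 5 = sandbox
        echo "172.$((16 + offset)).0.0/16"
    }
    level_subnet 2               # prints 172.18.0.0/16, the public subnet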

Online backups

August 19, 2012

Over the past year the department has improved backups with respect to:

  • Accessibility, by having on-line backups to large-capacity backup servers;
  • Diversity, locating one of two backups remotely, to reduce site risk;
  • Security, with end-to-end encryption so that the remote server has neither the data nor the keys;
  • Correctness, by making sure the data is in a consistent state throughout the backup.

The problem of backup is more nuanced than it first seems. While most users want from a backup the restoration of a deleted file, the aim of a backup is to restore a machine to the state it was in at a certain point in time. These time points are, for instance, nightly for the past week, weekly for the past month or so, and then several monthly or semi-monthly points going back for whatever number of years is desired. That this might include the ability to restore a user file is somewhat coincidental.

In the absence of a separate backup mechanism more appropriate for user files, the response is to segregate user data from system data, and to make sure the periodic backup system has appropriate spacing, retention, and technology independence for the purposes of user data.

The accessibility, diversity and security requirements are met using a remote machine, with a standard dump piped first through openssl encryption and then through ssh to the remote machine.
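
A minimal sketch of that pipeline, assuming a level-0 dump of /dev/sda1, a locally held passphrase file, and hill as the remote target; the paths and file names are illustrative, not the actual configuration:

    # dump writes to stdout, openssl encrypts with a key that stays local,
    # and ssh lands only the ciphertext on the remote backup server.
    dump -0u -f - /dev/sda1 \
      | openssl enc -aes-256-cbc -salt -pass file:/root/backup.pass \
      | ssh backup@hill.cs.miami.edu 'cat > /backups/ungar-root.level0.dump.enc'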

Correctness requires consideration of the particular services. Using filesystem snapshots, dump can now have a consistent view of the filesystem, frozen at a point in time. FreeBSD supports snapshots natively in UFS, and Linux supports them generally using LVM. A MySQL database can be correctly backed up using mysqlhotcopy to duplicate the files that contain the database, properly locked and flushed to ensure consistency. Subversion has svnadmin hotcopy, which does the same for a Subversion repository.
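
As an illustration of the snapshot step on Linux, here is a sketch assuming an LVM volume group vg0 with a logical volume home; the names and snapshot size are assumptions:

    # Freeze a point-in-time view, dump the frozen view, then discard the snapshot.
    lvcreate --size 5G --snapshot --name home-snap /dev/vg0/home
    dump -0u -f /backup/home.level0.dump /dev/vg0/home-snap
    lvremove -f /dev/vg0/home-snap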

While this should settle the matter for filesystems and Subversion, it does seem possible that an improperly programmed database application might split a transaction, and the hotcopy might occur at the split.

Moving WordPress across servers

August 19, 2012

Moving a WordPress installation to a new server requires that three objects get moved:

  1. The WordPress application;
  2. The database of user content;
  3. The HTTP configuration.

On a fresh Red Hat-derived Linux, the required yum installs are: mysql-server, mysql, httpd, php, php-mysql. WordPress is installed via tar; as a PHP application, there is no build required. To move the WordPress installation, tar it and untar it at the destination. Change the owner and group to apache. If this is from a fresh install of WordPress, first untar the distribution, then overlay wp-config.php and the entire wp-content directory.
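
A sketch of those steps on the destination server, assuming the WordPress tree lives at /var/www/html/wordpress (the path is an assumption; adjust to the actual DocumentRoot):

    # Install the stack, unpack the tree tar'ed on the old server, fix ownership.
    yum install -y mysql-server mysql httpd php php-mysql
    tar xzf wordpress.tar.gz -C /var/www/html
    chown -R apache:apache /var/www/html/wordpress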

To move the database, use mysqlhotcopy to make a copy of the files in, say, /var/lib/mysql/_database_name_ into a suitable location, say /opt/backup/mysql/_database_name_.

bash> mysqlhotcopy -u root -p PASS --allowold wildpages /opt/backup/mysql

Note that you probably want to delete all unapproved comments (spam) before doing the backup, as they can account for the largest amount of data in your database. This can be done in mysql as:

mysql> DELETE FROM wp_4_comments WHERE comment_approved = '0';

The cloned directory can be tar'ed and moved, being careful to stop the mysql server during the untar. Finally, the new owner has to be introduced to the mysql system by:

mysql> GRANT ALL PRIVILEGES ON _database_name_.* to "_database_name_"@"localhost" \
identified by "mypassword";
mysql> flush privileges;

Modify the Apache config, copying in the old configuration.
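
For instance, if the site's virtual host lives in its own file, it can be carried over and Apache restarted; the host and file names here are placeholders:

    # Copy the old virtual-host file across and restart Apache to pick it up.
    scp oldserver:/etc/httpd/conf.d/wordpress.conf /etc/httpd/conf.d/
    service httpd restart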

Welcome Bragg

August 19, 2012

Bragg is now up and running, thanks to Irina. Bragg is a Dell R510 running a single Intel E5620 at 2.4 GHz, with 24 GB of 1333 MHz dual-ranked RDIMMs, a PERC H700 RAID controller, and 600 GB/2 TB mirrored SAS.

It will be taking on the responsibilities of Davis.

We have moved to CentOS, because UM is no longer purchasing a site license with Red Hat, and Fedora, Red Hat's free release, revs very quickly. That keeps us too busy chasing revs.

Welcome Hill

June 11, 2012

Welcome our newest server, hill.physics.miami.edu (CNAME hill.cs.miami.edu). The Dell R510 is located in the physics department server room and runs Ubuntu. Its purpose is to hold backups of other cs machines, offsite in case of a room-wide emergency. Running Ubuntu 10.04.4 LTS, we are good to go until April 2015. (We missed 12.04 LTS by a day! Else we wouldn't need a major upgrade until April 2017.)

P.S. Mysqld is crashing on blog.cs.miami.edu on the cloud. Looks like out of memory. Perhaps a micro edition just isn’t enough.

Migration to the cloud

June 11, 2012

In preparation for updating meade, the services svn.cs.miami.edu, blog.cs.miami.edu, and aigames.cs.miami.edu have been moved to instances on AWS. Only web.cs.miami.edu remains on meade; web.cs.miami.edu/aigames will have a permanent redirect to aigames.cs.miami.edu.

/home homepages

August 16, 2011

www.cs.miami.edu/~[username] is now supplemented with www.cs.miami.edu/home/[username].

This change allows [username]'s homepages to be addressed by both styles of URL.

Your account was created with a symbolic self-link "public_html -> ." in your public_html directory. This self-link means that the new URL scheme will work alongside the old URL scheme without any change to your homepages.

If one or the other URL is not working, it is easy enough to fix.
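
If the self-link is missing, here is a minimal sketch of recreating it by hand, assuming the standard ~/public_html layout:

    # Inside the web directory, create a symlink named public_html that points to "."
    cd ~/public_html
    ln -s . public_html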

blog.cs.miami.edu is moving …

August 3, 2011

WordPress for the department is moving to a new computer, using the new "Network Admin" possibilities of WordPress 3.0. It is a big improvement.

Some tricks for WordPress users:

To move a site, it helps to change the URLs in the database:

mysql> update wp_options set option_value='http://jackson.cs.miami.edu/burt' where option_name='siteurl';
Query OK, 1 row affected (0.00 sec)
Rows matched: 1 Changed: 1 Warnings: 0

mysql> update wp_options set option_value='http://jackson.cs.miami.edu/burt' where option_name='home';
Query OK, 1 row affected (0.00 sec)
Rows matched: 1 Changed: 1 Warnings: 0

If you don't do this, some pages will work, but then the site will jump to the URL described by these values. This worked pretty well on the old WordPress, and might work on WordPress 3.0 without Network Admin enabled. I couldn't migrate the URL of a 3.0 install with Network Admin and ended up dropping the entire database and starting again.

In general, it is hard to change the URL of a WordPress site. When migrating a site there just has to be downtime: export the data and shut down the old site; fix the DNS and install on the new machine at the desired URL.

To set the password, find a working user_pass from another blog, with known password:

mysql> update wp_users set user_pass='$P$BV3q/rYU98poi/Zv/ARuStuPi.Wvad.' where user_login='admin';
Query OK, 1 row affected (0.02 sec)
Rows matched: 1 Changed: 1 Warnings: 0

 