This article is an extension of the previous article about storing sessions in Memcached with PHP.
Using Memcached for session storage is generally a matter of speed (storing sessions in memory is faster than on disk) and of scalability (using a Memcached server allows several web servers to serve the same PHP application seamlessly).
In the previous article, written 3 years ago, we mentioned that the php5-memcached extension did not seem to handle session storage very well, or at least that this kind of setup was poorly documented. This time, we’ve been able to do it with php5-memcached. This time also, we’re distributing Memcached across several servers: typically, a cloud environment with several web servers and no dedicated Memcached server, or one that needs some kind of fail-over for the Memcached server. We’re setting this up on three 64-bit Ubuntu 14.04 servers that act as load-balanced web servers behind a load balancer managed by Nginx. If you use another distribution or operating system, you might need to adapt the commands shown here.
Installing Memcached + PHP
First, install the required software on the 3 web servers. You can either do that on one of them and then replicate the image if you are using cloud instances, or install simultaneously on all 3 web servers using ClusterSSH, for example, with the cssh server1 server2 server3 command:
sudo apt-get install memcached php5-memcached
sudo service apache2 restart
The second command makes sure Apache picks up the newly installed php5-memcached extension.
To allow connections to Memcached from the different load-balanced servers, we need to change the Memcached configuration so it listens on the external IP address. Check the IP address with /sbin/ifconfig. The Memcached configuration file is located at /etc/memcached.conf. Locate the “-l” option, change “127.0.0.1” to your external IP, then save and close the file. Note that this might introduce a security flaw, as you are possibly opening your Memcached server to connections from the outside world. You can prevent that with a firewall (iptables is a classic, available on Ubuntu).
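For example (a sketch only, assuming your web servers live in a hypothetical 10.0.0.0/24 network; adjust the addresses to your own), two iptables rules could restrict the Memcached port to your own machines:

```shell
# Allow the load-balanced web servers to reach Memcached...
iptables -A INPUT -p tcp --dport 11211 -s 10.0.0.0/24 -j ACCEPT
# ...and reject connections to Memcached from anywhere else.
iptables -A INPUT -p tcp --dport 11211 -j DROP
```

Remember that plain iptables rules are lost at reboot, so persist them with whatever mechanism your distribution offers.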
Next, restart the Memcached daemon:
sudo service memcached restart
Now you have a Memcached server running on each of your web servers, accessible from the other web servers. To test this, connect to any of your web servers and try a telnet connection on the default Memcached port: 11211, like so:
user@server1$ telnet ip-server2 11211
To exit the telnet session, type “quit” and press Enter.
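Once connected, you can also do a quick sanity check using the Memcached text protocol (the key “foo” and value “bar” below are just examples):

```
stats              <- the server answers with a series of "STAT ..." lines
set foo 0 60 3     <- store key "foo" for 60 seconds, value of 3 bytes
bar                <- the 3-byte value itself; the server answers "STORED"
get foo            <- the server answers "VALUE foo 0 3", "bar", then "END"
quit
```

If set/get work from one web server against another server’s Memcached, your distributed setup is reachable.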
OK, so now we have 3 Memcached servers; all that is left is to configure PHP to use them to store sessions.
Configuring PHP to use Memcached as session storage
This is done by editing your Apache VirtualHost files (on each web server) and adding (before the closing </VirtualHost> tag) the following PHP settings:
php_admin_value session.save_handler memcached
php_admin_value session.save_path "ip-server1:11211,ip-server2:11211,ip-server3:11211"
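Alternatively (a sketch; the file name is just a suggestion), the same two settings can live in the PHP configuration itself, in a file under /etc/php5/conf.d/:

```
; hypothetical file: /etc/php5/conf.d/memcached-sessions.ini
session.save_handler = memcached
session.save_path = "ip-server1:11211,ip-server2:11211,ip-server3:11211"
```

The VirtualHost approach with php_admin_value has the advantage of not letting the application override the setting.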
Now reload your web server:
sudo service apache2 reload
You should now be able to connect to your web application using the distributed Memcached server as a session storage (you usually don’t need to change anything in your application itself, but some might exceptionally define their own session storage policy).
The dangers of using a distributed Memcached server
Apart from the possibly open access to your Memcached server mentioned above, which is a security concern, you have to take another danger, mostly related to high availability, into account.
When using a distributed Memcached configuration, it is important to understand that it works as a sharded space. That is, it does not replicate the same sessions across the various available Memcached servers; each single session is stored on one single server. How it decides where to store each session is beyond the scope of this article, but it means that, if you have 300 users with active sessions on your system at any one time, and one of your web servers goes down, you still have 2 web servers and 2 Memcached servers, but around 100 users will lose their session (the ones stored on the server that went down).
Worse: the PHP configuration will not detect this, and will still try to send sessions to the server that was considered to hold these 100 sessions, making it impossible for those users to log in again until the corresponding Memcached server is back up (unless you change your PHP configuration).
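To build an intuition of this sharding, here is a small simulation (a sketch only: the real php5-memcached client uses its own hashing algorithm, not this one). It hashes 300 fake session IDs and distributes them over 3 servers, showing that each server ends up holding roughly a third of the sessions, which is exactly the share you would lose if that server went down:

```shell
#!/bin/sh
# Sketch: distribute 300 session IDs over 3 servers by hashing.
# This mimics the idea of sharding, not php5-memcached's exact algorithm.
c0=0; c1=0; c2=0
for i in $(seq 1 300); do
  # Hash the (fake) session ID and keep the CRC value...
  h=$(printf 'sess%s' "$i" | cksum | cut -d' ' -f1)
  # ...then pick one of the 3 servers with a modulo.
  case $((h % 3)) in
    0) c0=$((c0 + 1)) ;;
    1) c1=$((c1 + 1)) ;;
    2) c2=$((c2 + 1)) ;;
  esac
done
echo "server1=$c0 server2=$c1 server3=$c2"
```

Losing one server therefore loses that server’s share of the sessions, and only that share.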
This is why you have to consider 2 things, and why this article is just one step in the right direction:
- you should configure the Memcached servers for session management from inside your application (i.e. define a save_handler there and check the availability of each server *before* storing the session in it)
- if your sessions are critical, you should always have some kind of data persistence mechanism, whereby (for example), you store the session in the database once every ten times it is modified
We hope this was of some use in understanding how to use Memcached for session storage in PHP. Please don’t hesitate to leave questions or comments below.
If you’re in a hurry, know this:
- this procedure is slightly more complex (and so longer) than installing the charm on Amazon
- you can skip directly to “Installing Juju”
- if you already have Juju installed, you can skip to the last 2 lines of the “Installing Juju” section
- if you already have juju-docean installed and configured, you can skip directly to “Provisioning VMs”
- otherwise, just continue reading, it’s worth a few minutes…
This tutorial brings together a lot of advanced notions, so if you want to know more about any of the elements involved, please follow the links provided along the way.
Before anything else, please note that the following is highly experimental. There is still a series of issues to work out before this process is foolproof.
Before we start using commands and stuff, you’ll have to note the following:
- We are using a Chamilo Charm developed by José Antonio Rey (kudos to him) as a voluntary contribution to the project
- Charms are configuration recipes to install applications inside the Juju framework
- The Juju framework is developed by the Ubuntu team, so we’re using an Ubuntu (14.04) desktop (or in this case laptop) to launch all the following
- Digital Ocean is a cloud hosting provider, particularly cheap and good for development purposes. The “default” environment for Juju is Amazon, so we’ll have a few additional steps because of this choice. The Digital Ocean plugin for Juju is developed by geekmush on GitHub; as far as I know he is affiliated with neither Ubuntu nor Digital Ocean, so he is also worth praising for his contribution
- Chamilo requires a web server and a database server. In this Charm, it is assumed that we want both of these on separate virtual machines, so you will need two of them (unless you change the parameters a little)
- Juju is written in Go but relies on several Python libraries. As such, you need Python installed on your system, and Juju may complain about a few missing dependencies. Notably, I installed python3-yaml to avoid a few warnings (it is required for what follows, although the Juju installer says it’s optional)
Installing Juju
On a default Ubuntu desktop installation, you’ll have to install Juju first. Because we are going to connect Juju to Digital Ocean, we need a recent version of Juju, so let’s add it the unconventional way (via a PPA), launching the following on the command line:
sudo add-apt-repository ppa:juju/devel
sudo apt-get update && sudo apt-get install juju
juju version
For some reason, in my case, this created my home directory’s .juju/ folder with root permissions, which then prevented me from reconfiguring my environment (a requirement for the Digital Ocean Juju plugin), so I changed its ownership (my user is “ywarnier”, so change that to your user):
sudo chown -R ywarnier:ywarnier .juju
Then we need to install the juju-docean plugin:
sudo apt-get install python3-yaml
sudo pip install -U juju-docean
Setting up Digital Ocean access
Now we need to configure our Digital Ocean (D.O.) API access so the system can call D.O. on our behalf and create instances.
You first need to grab your API key, client ID and SSH key ID from the Digital Ocean interface. You can do that from the Digital Ocean API page. Obviously, you need a Digital Ocean account to do this, and a few bucks of credit (although you can get $10 of free credit from several places). If your API key says “Hidden”, that’s because you already have it stored somewhere (for other services?). If you don’t, you’ll have to re-generate one. Your SSH key ID is the name you gave to the SSH key you use from your computer to connect to your new instances. If you don’t have one, that’s probably because you haven’t configured any; you can do that in the “SSH Keys” menu item on the left side of your D.O. panel.
export DO_CLIENT_ID=aseriesof21alphanumericalcharacters
export DO_SSH_KEY="user@computer"
export DO_API_KEY=aseriesof32characters
Setting up the Digital Ocean Juju environment
Now we need a bit of manual config to be able to use Digital Ocean (last bit, promised). Edit the ~/.juju/environments.yaml file and paste the following:
environments:
  digitalocean:
    type: manual
    bootstrap-host: null
    bootstrap-user: root
Just a note: the “type: manual” line implies this is a bit more complicated than on Amazon; later on, we will have to launch a few more commands to provision new machines *before* we deploy Chamilo.
Generating the Juju environment
Now we’re going to create our Juju controller. The Juju controller can be an independent Virtual Machine (VM), or it can be the same as the one on which you will deploy Chamilo. It all depends on your budget and your requirements.
juju docean bootstrap --constraints="mem=1g, region=nyc1"
2014/06/22 11:50.24:INFO Launching bootstrap host
2014/06/22 11:51.29:INFO Bootstrapping environment
Note that we decided to use a 1GB (RAM) VM here (mem=1g), in a datacenter in New York (region=nyc1). For the record, I tried creating the machines in nyc2, which is also a valid D.O. datacenter, but it failed miserably (sometimes not creating the VM, sometimes creating it without an IP, sometimes creating it fully but never returning a proper success response for my environment to be created), so sticking to nyc1 is probably a reasonable time-saver.
Provisioning VMs
To be able to deploy Chamilo, we’ll use two VMs: one for the web server and one for the database:
juju docean add-machine -n 2 --constraints="mem=1g, region=nyc1"
2014/06/22 12:44.59:INFO Launching 2 instances
2014/06/22 12:46.42:INFO Registered id:1908893 name:digitalocean-8d14c9bc671555ff872d8d6731f84d68 ip:184.108.40.206 as juju machine
2014/06/22 12:49.08:INFO Registered id:1908894 name:digitalocean-a9ba29cfe55549f58e5f7e365199c5ed ip:220.127.116.11 as juju machine
The “-n 2” above creates these 2 instances at once, but you could also launch 2 instances with different properties, one by one. In our case, I suggest you use the Trusty version of Ubuntu for the MySQL machine, to avoid a little bug in the Precise version of the charm:
juju docean add-machine --constraints="mem=2g, region=nyc1"
juju docean add-machine --series=trusty --constraints="mem=1g, region=nyc1"
The important thing here is that you can later identify each machine by a simple ID, using juju status:
juju status
environment: digitalocean
machines:
  "0":
    agent-state: started
    agent-version: 1.19.3
    dns-name: 18.104.22.168
    instance-id: 'manual:'
    series: precise
    hardware: arch=amd64 cpu-cores=1 mem=994M
    state-server-member-status: has-vote
  "1":
    agent-state: started
    agent-version: 1.19.3
    dns-name: 22.214.171.124
    instance-id: manual:126.96.36.199
    series: precise
    hardware: arch=amd64 cpu-cores=1 mem=994M
  "2":
    agent-state: started
    agent-version: 1.19.3
    dns-name: 188.8.131.52
    instance-id: manual:184.108.40.206
    series: trusty
    hardware: arch=amd64 cpu-cores=1 mem=994M
If you made a mistake at some point, or just want to try things out, you can destroy these instances with:
juju docean terminate-machine 1
where “1” is the ID of the machine, as shown above before each of them.
Now we’ve got our machines, we just need to deploy the Chamilo Charm and the MySQL Charm (you need MySQL to run Chamilo):
juju deploy cs:~jose/chamilo --to 1
juju deploy mysql --to 2
Please note that the “--to n” option specifies on which machine you want to deploy the selected service.
Now, we need to configure Chamilo a little. We’re going to give it a domain name (you’ll have to point this domain name to the IP of the first machine, the one with the Chamilo service, in order to use it when ready) and a password for the “admin” user (the user created by default):
juju set chamilo domain=test.chamilo.net pass=blabla
Now we still need to tell Juju to link the Chamilo service with the MySQL service:
juju add-relation chamilo mysql
And finally, apply all the above and expose the chamilo service to the public:
juju expose chamilo
If something goes wrong with a service, you can always remove it with:
juju destroy-service chamilo
You can replace “chamilo” by the service with which you are having the issue, of course. If that doesn’t work out, you can always remove (terminate) the machine itself (see above).
You can connect at any time to any of your virtual machines through the command
juju ssh chamilo/0
where “chamilo/0” is the name appearing below “units” in your services.
You can check the status of all your instances at any time with juju status.
Note that, sometimes, you might end up with dozens or hundreds of instances. In this case, it won’t be as practical to show the status of all instances (I have no solution for that now, but I’m sure there is a way to filter the results of a juju status).
You can launch a command on the virtual machines’ command line like this:
juju run --service chamilo "tail /var/log/juju/unit-chamilo-0.log"
This way, you are actually executing the command remotely and getting the results locally.
You can also read the error log directly on the machine, by connecting via SSH first (see above) and then launching the same tail command locally.
Obviously, that gives you a little more flexibility.
Notes about unexpected errors
One of the “silent” assumptions is that Juju considers the default machine series to be Ubuntu Precise, while the default MySQL charm is configured for Trusty. This means that, to install that charm, you need a Trusty virtual machine; otherwise, you might hit other issues. In my case, the Precise charm didn’t really work (missing yaml), so I decided to go for Trusty.
You can choose the distribution of your machine with --series=trusty, for example:
juju docean add-machine --series=trusty --constraints="mem=2g, region=nyc1"
We tested the chamilo charm relatively extensively.
Unmounting the whole thing
If this was just a test and you’re happy with the result, maybe you want to remove everything. If so, the quickest way is the destroy-environment command, but you will first need to destroy each machine and, before that, each service:
juju destroy-service chamilo
juju destroy-service mysql
juju destroy-machine 1 2
juju destroy-environment digitalocean
This should remove the whole setup reasonably quickly.
You should still check your Digital Ocean’s dashboard, though, as apparently it doesn’t always delete the nodes you created with Juju…
Quick commands list for the impatient
Assuming you’re running Ubuntu 14.04 and that you know which values to change in the commands below:
sudo add-apt-repository ppa:juju/devel
sudo apt-get update && sudo apt-get install juju
sudo chown -R $USER:$USER ~/.juju
sudo apt-get install python3-yaml
sudo pip install -U juju-docean
export DO_CLIENT_ID=aseriesof21alphanumericalcharacters
export DO_SSH_KEY="user@computer"
export DO_API_KEY=aseriesof32characters
juju docean bootstrap --constraints="mem=1g, region=nyc1"
juju docean add-machine --constraints="mem=2g, region=nyc1"
juju docean add-machine --series=trusty --constraints="mem=1g, region=nyc1"
juju deploy cs:~jose/chamilo --to 1
juju deploy mysql --to 2
juju set chamilo domain=test.chamilo.net pass=blabla
juju add-relation chamilo mysql
juju expose chamilo
Then connect your browser to test.chamilo.net (which you must have pointed to the corresponding IP first) and log in with admin/blabla.
If you need an additional partition on a cloud server, or you need to partition a disk on the fly (while your PC or server is running), the following will be very useful.
On Linux it is possible to use a loop device as a virtual partition, relying on the loop module of the kernel, which is available in most distributions. This way, all the content that a disk or disk partition could hold is stored inside a single file.
In our case we applied this on Ubuntu and Debian, but there should be no problem on other distributions. Let’s begin.
The first thing to do is enable the module:
sudo modprobe loop
Then we check that it is enabled:
lsmod | grep loop
If it does not show up, and does not appear under /dev/loop* either, add a line containing “loop” to /etc/modules
and reboot the OS.
We use dd to create a 1GB file; this file will hold our entire partition:
dd if=/dev/zero of=/opt/vdisk count=2048000
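As a quick check of the size: dd uses a default block size of 512 bytes when bs= is not specified, so count=2048000 gives exactly 1000 MiB (roughly the 1GB announced):

```shell
# dd uses 512-byte blocks by default when bs= is not given
bytes=$((2048000 * 512))
echo "$bytes bytes = $((bytes / 1024 / 1024)) MiB"  # 1048576000 bytes = 1000 MiB
```

Adjust count (or pass an explicit bs=) if you want a different partition size.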
We attach it to any free loop device:
losetup /dev/loop0 /opt/vdisk
We check that it is correctly attached:
losetup -a
/dev/loop0: [fe00]:786448 (/opt/vdisk)
And there is our partition. We then format it (as ext4, to match the fstab entry below) so we can mount it like any other volume and start using it:
mkfs.ext4 /dev/loop0
We create the directory where we will mount the partition:
mkdir -p /mnt/virtual
We mount the loop device (remember to use the same loop device as above):
mount /dev/loop0 /mnt/virtual
If we want to detach the loop device from the file:
umount /mnt/virtual
losetup -d /dev/loop0
If you use it frequently, you will probably want it mounted automatically at each boot; this can be done by adding the following to /etc/fstab:
/opt/vdisk /mnt/virtual ext4 loop=/dev/loop0,user,auto,noatime 0 0
And that’s it! We now have our “virtual” partition.
Just a (great) reference: http://vimregex.com/
Read it = love it!
Munin 2.0 has been released and packaged for Debian, and even backported to Squeeze (from backports.debian.org).
Even though there are still some quirks in this version (or just in the Debian packaging), it is far better (more scalable, more powerful and prettier) than version 1.4.
Basically, the following article should cover it all: http://munin-monitoring.org/wiki/CgiHowto2, but so far it doesn’t quite get there.
Let’s see together how to install it successfully on Debian Squeeze. I will however not cover the agent (Munin Node), as there is no significant difference between the basic installation of versions 1.4 and 2.0.
As a first significant performance improvement, Munin can now use RRDcached (which considerably reduces disk I/O pressure on RRD files), and it is fairly easy to set up. Just install the rrdcached package (who would have guessed?), then add the following options to OPTS in /etc/default/rrdcached:
OPTS="-s munin -l unix:/var/run/rrdcached.sock -b /var/lib/munin/ -B -j /var/lib/munin/journal/ -F"
This will override its defaults. And of course, then restart the daemon.
Adapt /etc/munin/apache.conf to your liking; in this case, we are going to uncomment all CGI- and FastCGI-related blocks.
Install the libapache2-mod-fcgid and spawn-fcgi packages, then download the following script and install it as an initscript on your system (e.g. as /etc/init.d/spawn-fcgi-munin-graph, then running insserv):
http://files.julienschmidt.com/public/cfg/munin/spawn-fcgi-munin-graph (though this version is still buggy and quite fragile, contact me for a slightly improved version)
apt-get install libapache2-mod-fcgid spawn-fcgi
Add user munin and www-data to group adm, and allow group adm to write to /var/log/munin/munin*-cgi-*.log:
adduser munin adm
adduser www-data adm
chmod g+w /var/log/munin/munin*-cgi-*.log
Add user www-data to group munin and the opposite:
adduser www-data munin; adduser munin www-data
Start the spawn-fcgi-munin-graph service and check it is indeed running.
Enable the fcgid and rewrite Apache modules and restart the Apache2 service.
Customize /etc/munin/munin.conf to your liking, enabling the (Fast)CGI parts.
When monitoring more than a single host, I recommend moving (i.e. commenting out and copying) the localhost definition to a new file per domain under /etc/munin/munin-conf.d/ (e.g. beeznest.conf), and adding your hosts there, with a meaningful domain name.
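As a sketch (the domain, host name and IP below are made up), such a per-domain file could look like this:

```
# /etc/munin/munin-conf.d/beeznest.conf
[beeznest.com;web1]
    address 192.0.2.10
    use_node_name yes
```

Each [domain;host] section groups the host under that domain in the generated pages.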
The company BeezNest Latino S.A.C. is looking for two (02) collaborators to work in the DEVELOPMENT area on free software projects (Drupal / Chamilo).
Candidates must have PROVEN technical experience in as many of these areas as possible:
1. Advanced knowledge of the PHP5 programming language (minimum 2 years of web development experience)
2. Knowledge of MySQL databases
3. Advanced knowledge of English
4. Knowledge of non-graphical web development tools (FTP/SFTP clients, SSH, Eclipse, CVS/SVN, …), Linux.
On a personal level, the applicant should be:
1. Very skilled, with a good level of communication.
2. Proactive, responsible and punctual.
4. A team player, willing to share responsibilities with colleagues.
5. Used to dealing directly with different areas of the organization.
6. Adaptable, with critical thinking and an ability to acquire new knowledge.
The position offers:
– Being part of the BeezNest team.
– Continuous training, 100% funded by BeezNest, for the course: PHP web expert http://www.beeznest.com/es/diplomado/experto-php
– Developing and perfecting your analysis and development skills in PHP.
– Working hours from Monday to Friday, 9:00am to 6:00pm (1-hour lunch break).
– A credit to buy a laptop for work at BeezNest.
– An excellent working environment.
– A competitive salary, growing according to the skills shown.
– Direct manager: Ing. Yannick Warnier (a developer with more than 9 years of experience in PHP development, leader of the development of the open-source platform Chamilo, Zend certified, recognized for his important contributions to changing the perception of free software in Peru and its important applications in education and business)
Interested candidates should send their CV in .PDF FORMAT to firstname.lastname@example.org
(IMPORTANT: ATTACH REFERENCES FROM PREVIOUS JOBS, INDICATING A CONTACT NUMBER AND CONTACT PERSON FOR VERIFICATION)
BeezNest Latino S.A.C
Today we had a server stalled on “Loading kernel modules” at reboot (after adding 12GB of RAM, for a total of 24GB). The datacenter didn’t know what to do and put us on a 32-bit rescue console from which we (obviously) couldn’t launch a 64-bit chroot to update the kernel. The situation seemed pretty desperate.
Our sysadmin, Jérôme, once again came to the rescue. Waiting for the datacenter to respond could have stretched the reboot time to 45 minutes. The only possible thing to do: replace initrd with a similar version (in case the original had been damaged). Luckily, we had another similar machine that had been installed the same day (with a 1-hour difference), so we copied its initrd image over and launched a reboot. After 5 long (never-ending) minutes of no response from the server, it just popped back online.
That’s another one to remember for sysadmin’s day!