
Archive for the ‘Virtualization’ Category

Using php5-memcached to store sessions on distributed servers

This article is an extension of the previous article about storing sessions in Memcached with PHP.

Using Memcached for session storage is generally a matter of speed (storing sessions in memory is faster than on disk) and of scalability (using a Memcached server allows you to have several web servers serving the same PHP application in a seamless way).

In the previous article (three years ago), we mentioned that the php5-memcached extension did not seem to manage the storage of sessions quite well, or at least that the configuration of such a setup was not well documented. This time, we've been able to do it with php5-memcached. This time also, we're talking about distributing Memcached across several servers: typically, a cloud environment with several web servers and no dedicated Memcached server, or a need for some kind of fail-over solution for the Memcached server. We're setting this up on three 64-bit Ubuntu 14.04 servers that act as load-balanced web servers behind a load balancer managed by Nginx. If you use another distribution or operating system, you might need to adapt the commands we're showing here.

Installing Memcached + PHP

So the first few commands you need to launch install the required software on the 3 web servers (you can either do that on one of them and then replicate the image if you are using cloud instances, or install simultaneously on your 3 web servers using ClusterSSH, for example, with the cssh server1 server2 server3 command):

sudo apt-get install memcached php5-memcached
sudo service apache2 restart

The second command makes sure Apache picks up the newly installed php5-memcached extension.
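If you also have the PHP command line installed (php5-cli, which is an assumption on my part rather than a requirement of this setup), a quick way to confirm the extension is registered is:

php5 -m | grep memcached

If the extension is loaded, this should simply print "memcached".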

In order to enable connections to Memcached from the different load-balanced servers, we need to change the Memcached configuration to listen on the external IP address. Check the IP address with /sbin/ifconfig. The Memcached configuration file is located at /etc/memcached.conf. Locate the "-l" option, change "127.0.0.1" to your external IP, then save and close the file. Note that this might introduce a security flaw, as you are possibly opening the connection to your Memcached server to the outside world. You can prevent that with a firewall (iptables is a big classic, available on Ubuntu).
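For example, the relevant line of /etc/memcached.conf would end up looking like the sketch below (the address is obviously an example), and a couple of iptables rules can then restrict port 11211 to the two other web servers. This is only a minimal sketch: the rules are not persistent across reboots unless you save them with your preferred mechanism.

# /etc/memcached.conf: listen on the external address instead of localhost
-l 10.0.0.11

# iptables sketch: allow the other web servers, drop everyone else on 11211
sudo iptables -A INPUT -p tcp --dport 11211 -s ip-server2 -j ACCEPT
sudo iptables -A INPUT -p tcp --dport 11211 -s ip-server3 -j ACCEPT
sudo iptables -A INPUT -p tcp --dport 11211 -j DROP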

Next, restart the Memcached daemon:

sudo service memcached restart

Now you have a Memcached server running on each of your web servers, accessible from the other web servers. To test this, connect to any of your web servers and try a telnet connection on the default Memcached port: 11211, like so:

user@server1$ telnet ip-server2 11211

To get out of there, just type "quit" and press Enter.
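As an extra sanity check, you can issue a stats command inside that telnet session; the exact values will differ on your servers, but you should get a series of STAT lines ending with END (abridged here):

user@server1$ telnet ip-server2 11211
stats
STAT pid 1203
STAT curr_connections 5
...
END
quit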

OK, so now we have 3 Memcached servers; all that's left is to configure PHP to use these Memcached servers to store sessions.

Configuring PHP to use Memcached as session storage

This is done by editing your Apache VirtualHost files (on each web server) and adding (before the closing </VirtualHost> tag) the following PHP settings:

 php_admin_value session.save_handler memcached
 php_admin_value session.save_path "ip-server1:11211,ip-server2:11211,ip-server3:11211"
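If you prefer setting this globally rather than per VirtualHost, the equivalent php.ini (or conf.d snippet) settings should look roughly like this, assuming the same three servers:

 session.save_handler = memcached
 session.save_path = "ip-server1:11211,ip-server2:11211,ip-server3:11211"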

Now reload your web server:

 sudo service apache2 reload

You should now be able to connect to your web application using the distributed Memcached servers as session storage (you usually don't need to change anything in your application itself, although some applications exceptionally define their own session storage policy).

The dangers of using a distributed Memcached server

Apart from the possibly open access to your Memcached servers mentioned previously, which is a security concern, you have to take another danger into account, this one mostly related to high availability.

When using a distributed Memcached configuration, it is important to understand that it works as a sharded space. That is, it doesn't replicate the same sessions across the various available Memcached servers; each single session is stored on one single server. How the destination server is chosen is outside the scope of this article, but it means that, if you have 300 users with active sessions on your system at any one time and one of your web servers goes down, you still have 2 web servers and 2 Memcached servers, but around 100 users will lose their session (the ones stored on the server that went down).

Worse: the PHP configuration will not be aware of this, and will still try to send sessions to the server that was supposed to hold those 100 sessions, making it impossible for the affected users to log in again until the corresponding Memcached server is back up (unless you change your PHP configuration).

This is why you have to consider 2 things, and why this article is just one step in the right direction:

  • you should configure the Memcached servers from inside your application for session management (that is, define a save_handler there and check the availability of each server *before* storing the session in it)
  • if your sessions are critical, you should always have some kind of data persistence mechanism, whereby (for example), you store the session in the database once every ten times it is modified

We hope this was of some use to you in understanding how to use Memcached for session storage in PHP. Please don't hesitate to leave questions or comments below.

Using Chamilo juju charm to setup a dev environment on Digital Ocean

June 23, 2014

If you’re in a hurry/on speed, know this:

  • this procedure is slightly more difficult (so longer) than installing the charm on Amazon
  • you can skip directly to “Installing Juju”
  • if you already have juju installed, you can skip to the last 2 lines of the “Installing juju” section
  • if you already have juju-docean installed and configured, you can skip directly to “Provisioning VMs”
  • otherwise, just continue reading, it’s worth a few minutes…

This tutorial brings together a lot of advanced notions, so if you want to know more about one of the following elements, please follow these links:

Before anything else, please note that the following is highly experimental. There are still a number of issues that should be worked out in order to make this process foolproof.

Basic setup

Before we start using commands and stuff, you’ll have to note the following:

  • We are using a Chamilo Charm developed by José Antonio Rey (kudos to him) as a voluntary contribution to the project
  • Charms are configurations to install applications (and stuff) inside the Juju framework
  • The Juju framework is developed by the Ubuntu team, so we’re using an Ubuntu (14.04) desktop (or in this case laptop) to launch all the following
  • Digital Ocean is a cloud hosting provider, which is particularly cheap and good for development purposes. The “default” environment for Juju is Amazon, so we’ll have a few additional steps because of this choice. The Digital Ocean plugin for Juju is developed by geekmush on Github and, as far as I know, he is related to neither Ubuntu nor Digital Ocean, so he is also worth praising for his contribution
  • Chamilo requires a web server and a database server. In this Charm, it is assumed that we want both of these on separate virtual machines, so you will need two of them (unless you change the parameters a little)
  • Juju is written in Go but relies on several Python libraries. As such, you’ll have to have python installed on your system and maybe Juju will shout because it is missing a few dependencies. Notably, I installed python3-yaml to avoid a few warnings (it is required for the following, although the installer for Juju says it’s optional)

Installing Juju

On a default Ubuntu desktop installation, you’ll have to install Juju first. Because we are going to use Juju connected to Digital Ocean, we need a recent version of Juju, so let’s add it the unconventional way (with the PPA), launching the following on the command line:

sudo add-apt-repository ppa:juju/devel
sudo apt-get update && sudo apt-get install juju
juju version

For some reason, in my case, this created my home directory’s .juju/ folder with root permissions, which then prevented me from reconfiguring my environment (a requirement for the Digital Ocean Juju plugin), so I changed the ownership (my user is “ywarnier”, so change that to your user):

sudo chown -R ywarnier:ywarnier .juju

Then we need to install the juju-docean plugin:

sudo apt-get install python3-yaml
sudo pip install -U juju-docean

Setting up Digital Ocean access

Now we need to configure our Digital Ocean (D.O.) API access so the system will be able to call D.O. on our behalf and create instances (and stuff).

You first need to grab your API key, client ID and SSH key ID from the Digital Ocean interface. You can do that from the Digital Ocean API page. Obviously, you need a Digital Ocean account to do this and a few bucks of credit (although you can get $10 of free credit from several places). If your API key says “Hidden”, that’s because you must already have it stored somewhere (for other services?). If you don’t, you’ll have to re-generate one. Your SSH key ID is the name you gave to the SSH key you use from your computer to connect to your new instances. If you don’t have one, that’s probably because you haven’t configured any; please do so in the “SSH Keys” menu item on the left side of your D.O. panel.

 export DO_CLIENT_ID=aseriesof21alphanumericalcharacters
 export DO_SSH_KEY="user@computer"
 export DO_API_KEY=aseriesof32characters

Setting up the Digital Ocean Juju environment

Now we need a bit of manual config to be able to use Digital Ocean (last bit, promised). Edit the ~/.juju/environments.yaml file and paste the following:

environments:
  digitalocean:
    type: manual
    bootstrap-host: null
    bootstrap-user: root
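If you have several environments defined in that file (Amazon being the usual default), you probably need to make this one active before bootstrapping. If I recall correctly, this is done with:

juju switch digitalocean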

Just a note: the “type: manual” line implies that things are a bit more complicated later on than on Amazon, and that we will have to launch a few more commands to provision new machines *before* we deploy Chamilo.

Generating the Juju environment

Now we’re going to create our Juju controller. The Juju controller can be an independent Virtual Machine (VM), or it can be the same as the one on which you will deploy Chamilo. It all depends on your budget and your requirements.

juju docean bootstrap --constraints="mem=1g, region=nyc1"
  2014/06/22 11:50.24:INFO Launching bootstrap host
  2014/06/22 11:51.29:INFO Bootstrapping environment

Note that we decided to use a 1GB (RAM) VM here (mem=1g), in a datacenter in New York (region=nyc1). For the record, I tried creating the machines in nyc2, which is also a valid D.O. datacenter, but it failed miserably (sometimes not creating the VM, sometimes creating it without an IP, sometimes creating it fully but never returning a proper success response, so the environment was never created), so sticking to nyc1 is probably a reasonable time-saver.

Provisioning VMs

To be able to deploy Chamilo, we’ll use two VMs: one for the web server and one for the database.

juju docean add-machine -n 2 --constraints="mem=1g, region=nyc1"
2014/06/22 12:44.59:INFO Launching 2 instances
2014/06/22 12:46.42:INFO Registered id:1908893 name:digitalocean-8d14c9bc671555ff872d8d6731f84d68 ip:198.199.82.172 as juju machine
2014/06/22 12:49.08:INFO Registered id:1908894 name:digitalocean-a9ba29cfe55549f58e5f7e365199c5ed ip:208.68.39.19 as juju machine

Now, the “-n 2” above lets you create these 2 instances at once, but you could also launch 2 instances with different properties, one at a time. In our case, I suggest you use the Trusty version of Ubuntu for the MySQL machine, to avoid a little bug in the Precise version of the charm:

juju docean add-machine --constraints="mem=2g, region=nyc1"
juju docean add-machine --series=trusty --constraints="mem=1g, region=nyc1"

The important thing here is that you can later identify each machine by a simple ID, using juju status:

juju status
environment: digitalocean
machines:
 "0":
  agent-state: started
  agent-version: 1.19.3
  dns-name: 192.241.142.154
  instance-id: 'manual:'
  series: precise
  hardware: arch=amd64 cpu-cores=1 mem=994M
  state-server-member-status: has-vote
 "1":
  agent-state: started
  agent-version: 1.19.3
  dns-name: 198.199.82.172
  instance-id: manual:198.199.82.172
  series: precise
  hardware: arch=amd64 cpu-cores=1 mem=994M
 "2":
  agent-state: started
  agent-version: 1.19.3
  dns-name: 208.68.39.19
  instance-id: manual:208.68.39.19
  series: trusty
  hardware: arch=amd64 cpu-cores=1 mem=994M

If you made a mistake at some point or just want to try things out, you can destroy these instances with

juju docean terminate-machine 1

where “1” is the ID of the machine, as shown above before each of them.

Deploying Chamilo

Now that we’ve got our machines, we just need to deploy the Chamilo Charm and the MySQL Charm (you need MySQL to run Chamilo):

juju deploy cs:~jose/chamilo --to 1
juju deploy mysql --to 2

Please note that the “--to n” option specifies on which machine you want to deploy the selected service.

Now, we need to configure Chamilo a little. We’re going to give it a domain name (you’ll have to redirect this domain name to the IP of the first machine – the one with the Chamilo service – in order to use it when ready) and a password for the “admin” user (the user created by default):

juju set chamilo domain=test.chamilo.net pass=blabla

Now we still need to tell Juju to link the Chamilo service with the MySQL service:

juju add-relation chamilo mysql

And finally, apply all the above and expose the chamilo service to the public:

juju expose chamilo

If something goes wrong with a service, you can always remove it with:

juju destroy-service chamilo

You can replace “chamilo” with whichever service you are having the issue with, of course. If that doesn’t work out, you can always remove (terminate) the machine itself (see above).

Useful tricks

You can connect at any time to any of your virtual machines through the command

juju ssh chamilo/0

where “chamilo/0” is the name appearing below “units” in your services.

You can check the status of all your instances with

juju status

Note that, sometimes, you might end up with dozens or hundreds of instances. In that case, it won’t be very practical to show the status of all instances (I have no definitive solution for that now, but I’m sure there is a way to filter the results of juju status).
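If I’m not mistaken, juju status accepts a service or unit name as a filter, so something like the following should limit the output to the Chamilo service (worth double-checking on your Juju version):

juju status chamilo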

You can launch a command on the virtual machines’ command line like this:

juju run --service chamilo "tail /var/log/juju/unit-chamilo-0.log"

This way, you are actually executing the command remotely and getting the results locally.

You can also see the error log locally, by connecting over SSH first and then launching:

 tail /var/log/juju/unit-chamilo-0.log

Obviously, that gives you a little more flexibility.

Notes about unexpected errors

One of the “silent” things is that Juju considers the default machine series to be Ubuntu Precise. In the case of MySQL, the default Charm is configured for Trusty. This means that if you want to install that package, you need to install a virtual machine running Trusty; otherwise, you might run into other issues. In my case, the Precise charm didn’t really work (missing yaml), so I decided to go for Trusty.

You can choose the distribution of your machine with --series=trusty, for example:

juju docean add-machine --series=trusty --constraints="mem=2g, region=nyc1"

We tested the chamilo charm relatively extensively.

Unmounting the whole thing

If this was just a test and you’re happy, maybe you want to remove everything. If so, the quickest way to do that is to launch a destroy-environment command, but you will first need to destroy each machine and, before that, each service running on them:

juju destroy-service chamilo
juju destroy-service mysql
juju destroy-machine 1 2
juju destroy-environment digitalocean

This should remove the whole setup reasonably quickly.

You should still check your Digital Ocean dashboard, though, as apparently it doesn’t always delete the nodes you created with Juju…

Quick commands list for the impatient

Assuming you’re running Ubuntu 14.04 and that you know which values to change in the commands below:

sudo add-apt-repository ppa:juju/devel
sudo apt-get update && sudo apt-get install juju
sudo chown -R youruser:youruser ~/.juju
sudo apt-get install python3-yaml
sudo pip install -U juju-docean
export DO_CLIENT_ID=aseriesof21alphanumericalcharacters 
export DO_SSH_KEY="user@computer" 
export DO_API_KEY=aseriesof32characters
juju docean bootstrap --constraints="mem=1g, region=nyc1"
juju docean add-machine --constraints="mem=2g, region=nyc1"
juju docean add-machine --series=trusty --constraints="mem=1g, region=nyc1"
juju deploy cs:~jose/chamilo --to 1
juju deploy mysql --to 2
juju set chamilo domain=test.chamilo.net pass=blabla
juju add-relation chamilo mysql
juju expose chamilo

Then connect your browser to test.chamilo.net (which you must have redirected to the corresponding IP first) and log in with admin/blabla.


								

BeezNest to launch cloud hosting for Chamilo LMS

November 19, 2012

At the beginning of December, BeezNest will be launching its new, low-cost cloud hosting service for Chamilo LMS 1.9. This will enable non-technical teachers to host their own Chamilo LMS portal by only completing a form and scheduling a monthly payment depending on the number of users subscribed to the platform.

We hope this will give teachers greater convenience in managing their course material and will give a gentle push in the right direction for the improvement of educational methodologies, making it easier for them to put their contents online and keep track of their students’ progress.

This cloud hosting will respect the philosophy of free software by giving you full access to the application’s source code.

More information about this very soon!

PHP FastCGI hangs

September 16, 2011

Just as a reminder to myself: this blog post explains why php-cgi might hang or stop silently when your load increases, and how to fix it. In short, it explains that FastCGI should have the parameter PHP_FCGI_MAX_REQUESTS set to 1000 or more (on my Ubuntu server, this can be done in /etc/default/php-fastcgi).
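On my server that file ends up looking more or less like the sketch below; the exact variables and the init script name depend on how your FastCGI wrapper was set up, so treat this as an illustration rather than a reference:

# /etc/default/php-fastcgi (illustrative values)
PHP_FCGI_MAX_REQUESTS=1000
PHP_FCGI_CHILDREN=4

# then restart the wrapper (the service name may differ on your system)
sudo service php-fastcgi restart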

 

Also worth a look is this page, which has nothing to do with FastCGI but describes how to increase the timeout in nginx when there is a proxy: http://blogs.agilefaqs.com/2011/07/21/upstream-connection-timed-out-error-in-nginx/ (that is, if you get the timed-out connection error and didn’t find the setting above in the first place). In short:

server {
    listen       80;
    server_name  my.website.com;
    server_name_in_redirect off;
    port_in_redirect        off;

    location / {
        proxy_set_header  X-Real-IP  $remote_addr;
        proxy_set_header  X-Real-Host  $host;
        proxy_read_timeout 120;
        ...
    }
    ...
}

The cloud and its flaws

September 16, 2010

I have just watched a video describing, in Spanish, what “Cloud Computing” is, or rather, the advantages it provides to businesses that want to adopt it.

Well, I was very disappointed by the lack of sincerity of this video. I’m including it here:

There is a limit to how much you can simplify complex topics, isn’t there? Past a certain point, it starts to become deception, in my opinion.
Here, the authors seem to “forget” a series of important problems with cloud computing:

  • not all applications, or all types of applications, support this kind of architecture
  • a shared application creates risks of information leaking between users, which are even more dangerous when the proposed application cannot be reviewed (i.e. closed source), as in the recent case of Apple having piles of iPad-related data stolen in the US (they lost customer data in a security attack)
  • developing your own application (if you cannot find the kind of application you need) takes much more time under this model (see the Drupal + Amazon analysis on acquia.com, for example), because a series of different paradigms arise at the moment of implementing the application

The arguments of “more secure, cheaper, more flexible” only apply to solutions that have already existed for a long time. The same arguments hold for a single server (in non-cloud mode) with shared applications (the now “classic” Application Service Provider model).

In short, although cloud computing is useful for very high volumes of data, in many cases it is not necessary in order to provide an excellent level of quality at low cost.

A recent Symantec article reports serious shortcomings in cloud adoption by the IT departments of large companies. See the Indexel summary article (in French) here.

VirtualBox

May 31, 2009

VirtualBox (http://www.virtualbox.org/), a free software package based on Qemu (http://www.nongnu.org/qemu/)[1] and published by Innotek (recently acquired by Sun), is an excellent tool for virtualizing workstations on a workstation[2].
Besides being very smooth, it runs on Windows, GNU/Linux and (Open)Solaris[3].

There is a free version (VirtualBox Open Source Edition, aka OSE) and a proprietary one that offers more features (mostly regarding the emulated hardware).

The OSE version is available in all good GNU/Linux distributions, while Sun’s version is only available from their website. They even provide APT sources[4] so that it is easy to install on Debian or a derivative.

Be careful, though: the OSE version also uses virtual machine configuration files in a noticeably older format than the proprietary VirtualBox. VirtualBox will offer to upgrade the format at startup if needed.

