How to configure swift to use a glusterfs storage on OpenStack Havana

First of all we have to clarify what we mean by using a glusterfs storage. There are two possibilities:

  1. use glusterfs as an opaque storage backend for swift, so that the files are (easily) accessible only through swift;

  2. deploy gluster-swift (https://launchpad.net/gluster-swift), so that the very same file can be accessed both via glusterfs and via swift.

The problem with option 1) is that files uploaded via swift get an opaque path: they will be stored on gluster under something like /glusterfs/swift/objects/53521/5ff/344464ea64bc44d8d4df57427e00c5ff/1391806952.27457.data, which makes them essentially inaccessible except through swift.

Option 2) is a bit trickier and less supported, but it saves objects in the gluster filesystem using a direct mapping between the swift account/container/object name and the gluster path. For instance, the object called bar in the container foo of the account demo will be saved as /glusterfs/swift/demo/foo/bar, and will have a swift URL like http://A.B.C.D:8080/v1/AUTH_demo/foo/bar.

How to install a vanilla swift to write on a gluster filesystem

First of all you need to have a gluster filesystem up and running (I used elasticluster for testing) and to create a volume. In my case the volume is called default and mounted as /glusterfs/swift.
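
If you still need to create the volume, the commands would look roughly like this (a sketch only: elasticluster did this step for me, the brick paths are the ones shown by gluster volume info further down, and you may need to append force if your bricks are plain directories):

root@gluster-data001:~# gluster volume create default replica 2 \
    gluster-data00{1,2,3,4,5,6,7,8}:/srv/gluster/brick
root@gluster-data001:~# gluster volume start default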

Add the havana repository:

root@gluster-data001:~# cat > /etc/apt/sources.list.d/havana.list <<EOF
deb http://ubuntu-cloud.archive.canonical.com/ubuntu precise-updates/havana main
EOF

Remember to add the gpg key of the ubuntu-cloud repository:

root@gluster-data001:~# apt-key adv --recv-keys --keyserver \
    keyserver.ubuntu.com 5EDB1B62EC4926EA

Install swift packages:

root@gluster-data001:~# apt-get update
root@gluster-data001:~# apt-get install swift swift-proxy \
    swift-account swift-container swift-object memcached

Edit the swift configuration files. Ensure the following line is present in all /etc/swift/{account,container,object}-server.conf files:

devices = /glusterfs

Please note that swift will then assume that the devices you add are mounted inside the /glusterfs filesystem. Since the services actually check this, you need to mount gluster in a subdirectory of /glusterfs whose name is exactly the name of the swift device you will add later on.
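
After editing, a quick grep should confirm the setting in all three files (expected output after the change):

root@gluster-data001:~# grep ^devices /etc/swift/{account,container,object}-server.conf
/etc/swift/account-server.conf:devices = /glusterfs
/etc/swift/container-server.conf:devices = /glusterfs
/etc/swift/object-server.conf:devices = /glusterfs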

In our case the device is called swift, so we have to create the /glusterfs/swift directory and put this line in /etc/fstab:

gluster-data001:default /glusterfs/swift glusterfs defaults 0 0
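
With the fstab entry in place, creating the mount point and mounting it boils down to:

root@gluster-data001:~# mkdir -p /glusterfs/swift
root@gluster-data001:~# mount /glusterfs/swift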

Also, we have to change the ownership of that directory, as swift will probably run as a non-privileged user. By default the user is swift, so:

chown swift:swift /glusterfs/swift

Now you have to create the rings in /etc/swift:

root@gluster-data001:~# cd /etc/swift

root@gluster-data001:~/swift# swift-ring-builder container.builder create 18 1 1
root@gluster-data001:~/swift# swift-ring-builder account.builder create 18 1 1
root@gluster-data001:~/swift# swift-ring-builder object.builder create 18 1 1

root@gluster-data001:~/swift# swift-ring-builder object.builder add z1-127.0.0.1:6000/swift 100
root@gluster-data001:~/swift# swift-ring-builder container.builder add z1-127.0.0.1:6001/swift 100
root@gluster-data001:~/swift# swift-ring-builder account.builder add z1-127.0.0.1:6002/swift 100

root@gluster-data001:~/swift# swift-ring-builder object.builder rebalance
root@gluster-data001:~/swift# swift-ring-builder container.builder rebalance
root@gluster-data001:~/swift# swift-ring-builder account.builder rebalance

Please note that we added the devices without replication, since in our case replication is done by gluster itself.
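
If you want to double-check the result, invoking swift-ring-builder with just the path of a builder file prints a summary of the ring, including the replica count and the devices:

root@gluster-data001:~# swift-ring-builder /etc/swift/account.builder
root@gluster-data001:~# swift-ring-builder /etc/swift/container.builder
root@gluster-data001:~# swift-ring-builder /etc/swift/object.builder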

This should be enough to have swift up and running. The default configuration uses tempauth, so you will have to define the users in the /etc/swift/proxy-server.conf configuration file. Moreover, since the swift storage does not contain any account yet, requests may fail until the account has been created; to have the proxy create accounts automatically, set account_autocreate in the [app:proxy-server] section:

account_autocreate = true
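
In context, the relevant fragment of /etc/swift/proxy-server.conf then looks more or less like this (only the lines that matter are shown; the packaged file contains many more options):

[app:proxy-server]
use = egg:swift#proxy
account_autocreate = true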

Keystone authentication

Since we also want to enable keystone authentication we are going to install one more package:

root@gluster-data001:~# apt-get install python-keystone

Add two more stanzas to the proxy-server.conf file:

[filter:authtoken]
paste.filter_factory = keystone.middleware.auth_token:filter_factory
auth_host = cloud2.gc3.uzh.ch
auth_port = 35357
auth_protocol = http
auth_uri = http://cloud2.gc3.uzh.ch:5000/
admin_tenant_name = service
admin_user = swift
admin_password = PASSWORD
delay_auth_decision = 1

[filter:keystoneauth]
use = egg:swift#keystoneauth
# operator_roles lists the roles whose users are allowed to manage a
# tenant, i.e. to create containers or to grant ACLs to others.
operator_roles = Member, admin

Q: why do I set delay_auth_decision = 1? A: from the swift documentation (http://docs.openstack.org/developer/swift/overview_auth.html#keystone-auth):

If support is required for unvalidated users (as with anonymous
access) or for tempurl/formpost middleware, authtoken will need to
be configured with delay_auth_decision set to 1.

Then modify the main pipeline so that authtoken and keystoneauth come before proxy-server, replacing tempauth:

[pipeline:main]
# WAS: pipeline = catch_errors healthcheck cache ratelimit tempauth proxy-server
pipeline = catch_errors healthcheck cache ratelimit authtoken keystoneauth proxy-server

However, you may get this error while starting the proxy server:

root@gluster-frontend001:/etc/swift# swift-init proxy-server start
Starting proxy-server...(/etc/swift/proxy-server.conf)
Traceback (most recent call last):
  File "/usr/bin/swift-proxy-server", line 22, in <module>
    run_wsgi(conf_file, 'proxy-server', default_port=8080, **options)
  File "/usr/lib/python2.7/dist-packages/swift/common/wsgi.py", line 256, in run_wsgi
    loadapp(conf_path, global_conf={'log_name': log_name})
  File "/usr/lib/python2.7/dist-packages/swift/common/wsgi.py", line 107, in wrapper
    return f(conf_uri, *args, **kwargs)
  File "/usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 247, in loadapp
    return loadobj(APP, uri, name=name, **kw)
  File "/usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 271, in loadobj
    global_conf=global_conf)
  File "/usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 296, in loadcontext
    global_conf=global_conf)
  File "/usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 320, in _loadconfig
    return loader.get_context(object_type, name, global_conf)
  File "/usr/lib/python2.7/dist-packages/swift/common/wsgi.py", line 55, in get_context
    object_type, name=name, global_conf=global_conf)
  File "/usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 450, in get_context
    global_additions=global_additions)
  File "/usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 562, in _pipeline_app_context
    for name in pipeline[:-1]]
  File "/usr/lib/python2.7/dist-packages/swift/common/wsgi.py", line 55, in get_context
    object_type, name=name, global_conf=global_conf)
  File "/usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 458, in get_context
    section)
  File "/usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 517, in _context_from_explicit
    value = import_string(found_expr)
  File "/usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 22, in import_string
    return pkg_resources.EntryPoint.parse("x=" + s).load(False)
  File "/usr/lib/python2.7/dist-packages/pkg_resources.py", line 1989, in load
    entry = __import__(self.module_name, globals(),globals(), ['__name__'])
  File "/usr/lib/python2.7/dist-packages/keystone/middleware/__init__.py", line 18, in <module>
    from keystone.middleware.core import *
  File "/usr/lib/python2.7/dist-packages/keystone/middleware/core.py", line 21, in <module>
    from keystone.common import utils
  File "/usr/lib/python2.7/dist-packages/keystone/common/utils.py", line 32, in <module>
    from keystone import exception
  File "/usr/lib/python2.7/dist-packages/keystone/exception.py", line 63, in <module>
    class ValidationError(Error):
  File "/usr/lib/python2.7/dist-packages/keystone/exception.py", line 64, in ValidationError
    message_format = _("Expecting to find %(attribute)s in %(target)s."
NameError: name '_' is not defined

This is a known bug (https://bugs.launchpad.net/ubuntu/+source/swift/+bug/1231339); it can be fixed by editing the file /usr/lib/python2.7/dist-packages/keystone/exception.py and inserting the line

from keystone.openstack.common.gettextutils import _

after

from keystone.openstack.common import log as logging
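
If you prefer to apply the fix non-interactively, a sed one-liner along these lines should work on the node running the proxy server (assuming the "import log as logging" line appears exactly once in the file):

root@gluster-frontend001:~# sed -i \
    '/^from keystone.openstack.common import log as logging/a from keystone.openstack.common.gettextutils import _' \
    /usr/lib/python2.7/dist-packages/keystone/exception.py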

After that, you of course still have to configure keystone: add the swift service and endpoint on the controller node:

root@cloud2:~# keystone service-create --name=swift \
    --type=object-store --description="Swift Service"

Get the service id and use it for the endpoint:

root@cloud2:~# keystone endpoint-create --region RegionOne \
  --service 00f25d7bf64148d7a1bd0f3c6d2eb39e \
  --publicurl 'http://130.60.24.55:8080/v1/AUTH_$(tenant_id)s' \
  --internalurl 'http://130.60.24.55:8080/v1/AUTH_$(tenant_id)s' \
  --adminurl http://130.60.24.55:8080/v1
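
As a quick sanity check, both entries should now show up in keystone:

root@cloud2:~# keystone service-list | grep swift
root@cloud2:~# keystone endpoint-list | grep 8080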

Test it!

If everything is working fine you should be able to list the containers for the demo account with:

root@cloud2:~# swift --os-auth-url http://cloud2.gc3.uzh.ch:5000/v2.0 \
    -U demo:demouser -K demopwd list

and upload a file with

root@cloud2:~# swift --os-auth-url http://cloud2.gc3.uzh.ch:5000/v2.0 \
    -U demo:demouser -K demopwd upload antonio /etc/fstab

and download it with

root@cloud2:~# swift --os-auth-url http://cloud2.gc3.uzh.ch:5000/v2.0 \
    -U demo:demouser -K demopwd download antonio /etc/fstab

S3 tokens

TODO: check how to configure swift to also accept EC2-style tokens, so that the swift storage can be accessed with libraries such as boto.

How to install gluster-swift

To install gluster-swift you need to clone the git repository https://github.com/gluster/gluster-swift.git. Please note that the master branch is the development version targeting the OpenStack Icehouse release, while the havana branch is the one to use with OpenStack Havana.

root@gluster-data001:~# apt-get install --yes git
root@gluster-data001:~# git clone https://github.com/gluster/gluster-swift.git
root@gluster-data001:~# apt-get install --yes python-pip python-setuptools
root@gluster-data001:~# cd gluster-swift
root@gluster-data001:~/gluster-swift# git branch havana remotes/origin/havana
root@gluster-data001:~/gluster-swift# git checkout havana
root@gluster-data001:~/gluster-swift# python setup.py install

The default swift configuration files shipped in gluster-swift/etc are good enough to start with

root@gluster-frontend001:~/gluster-swift# cp etc/*conf-gluster /etc/swift/; rename -f 's/.conf-gluster/.conf/g' /etc/swift/*.conf-gluster

but you have to update the devices option in {account,container,object}-server.conf configuration files:

root@gluster-frontend001:~/gluster-swift# sed -i 's:/mnt/gluster-object:/glusterfs/swift:g' /etc/swift/*.conf
root@gluster-frontend001:~/gluster-swift# grep devices /etc/swift/*.conf 
/etc/swift/account-server.conf:devices = /glusterfs/swift
/etc/swift/container-server.conf:devices = /glusterfs/swift
/etc/swift/object-server.conf:devices = /glusterfs/swift

In order to be able to see the volume (in our case default) you also need to add the frontend node (which is not a data server in our setup) to the trusted storage pool.

You need to install the glusterfs server package, which provides the glusterd daemon:

root@gluster-frontend001:~# apt-get install glusterfs-server

Connect to a node that is already a member of the trusted pool and add the frontend node as a peer:

root@gluster-data001:~# gluster peer probe gluster-frontend001
peer probe: success

From the frontend node you should now be able to see the gluster volume:

root@gluster-frontend001:~/gluster-swift# gluster volume info

Volume Name: default
Type: Distributed-Replicate
Volume ID: 4db64aea-fb88-4a5e-865e-00ffe2acbaec
Status: Started
Number of Bricks: 4 x 2 = 8
Transport-type: tcp
Bricks:
Brick1: gluster-data006:/srv/gluster/brick
Brick2: gluster-data004:/srv/gluster/brick
Brick3: gluster-data008:/srv/gluster/brick
Brick4: gluster-data001:/srv/gluster/brick
Brick5: gluster-data005:/srv/gluster/brick
Brick6: gluster-data002:/srv/gluster/brick
Brick7: gluster-data007:/srv/gluster/brick
Brick8: gluster-data003:/srv/gluster/brick

Create the ring:

root@gluster-data001:/etc/swift# gluster-swift-gen-builders default
Ring files are prepared in /etc/swift. Please restart object store services

and now start the services:

root@gluster-frontend001:~/gluster-swift# for service in swift-{object,container,account,proxy}; do start $service; done
swift-object start/running
swift-container start/running
swift-account start/running
swift-proxy start/running

Test it!

At this point we don't have any authorization mechanism, so anyone can create containers by just running:

root@gluster-frontend001:~# curl -v -X PUT http://localhost:8080/v1/AUTH_default/mycontainer
* About to connect() to localhost port 8080 (#0)
*   Trying 127.0.0.1... connected
> PUT /v1/AUTH_default/mycontainer HTTP/1.1
> User-Agent: curl/7.22.0 (x86_64-pc-linux-gnu) libcurl/7.22.0 OpenSSL/1.0.1 zlib/1.2.3.4 libidn/1.23 librtmp/2.3
> Host: localhost:8080
> Accept: */*
> 
< HTTP/1.1 201 Created
< Content-Length: 0
< Content-Type: text/html; charset=UTF-8
< X-Trans-Id: txd4f40d07966e41c2b162a-0052f6409c
< Date: Sat, 08 Feb 2014 14:35:08 GMT
< 
* Connection #0 to host localhost left intact
* Closing connection #0

Note the HTTP/1.1 201 Created line, and check that the directory exists:

root@gluster-frontend001:~# ls /glusterfs/swift/default/mycontainer/ -ld
drwxr-xr-x 2 root root 16384 Feb  8 15:35 /glusterfs/swift/default/mycontainer/

You can also upload files:

root@gluster-frontend001:~# curl -X PUT -T /etc/fstab http://localhost:8080/v1/AUTH_default/mycontainer/fstab
root@gluster-frontend001:~# ls /glusterfs/swift/default/mycontainer/ -l
total 1
-rwxr-xr-x 1 root root 592 Feb  8 15:36 fstab

or download them:

root@gluster-frontend001:~# curl  http://localhost:8080/v1/AUTH_default/mycontainer/fstab
# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point>   <type>  <options>       <dump>  <pass>
proc            /proc           proc    nodev,noexec,nosuid 0       0
# / was on /dev/vda1 during installation
UUID=5bc66afd-122b-40c3-8294-d0f0ef90d06b /               ext3    errors=remount-ro 0       1
gluster-data006:default /glusterfs/swift glusterfs defaults 0 0

Keystone authentication

Keystone authentication is configured exactly as for standard swift. However, when using gluster-swift, accounts are not automatically created: one swift account corresponds exactly to one gluster volume, and the name of the gluster volume must be the same as the tenant id (not the tenant name).

Moreover, every time you create a tenant you have to re-create the swift ring.

In our case, the tenant demo has id a9b091f85e04499eb2282733ff7d183e:

root@cloud2:~# keystone tenant-get demo
+-------------+----------------------------------+
|   Property  |              Value               |
+-------------+----------------------------------+
| description |                                  |
|   enabled   |               True               |
|      id     | a9b091f85e04499eb2282733ff7d183e |
|     name    |               demo               |
+-------------+----------------------------------+

Then we create one brick directory on each data node:

root@gluster-frontend001:~# pdsh -R ssh -w gluster-data00[1-8] mkdir /srv/gluster/tenant-demo

and we create (and start) a volume with the same name as the tenant id (we use force because the bricks are plain directories, not separately mounted filesystems):

root@gluster-frontend001:~# gluster volume create a9b091f85e04499eb2282733ff7d183e replica 2 gluster-data00{1,2,3,4,5,6,7,8}:/srv/gluster/tenant-demo force
volume create: a9b091f85e04499eb2282733ff7d183e: success: please start the volume to access data
root@gluster-frontend001:~# gluster volume start a9b091f85e04499eb2282733ff7d183e 
volume start: a9b091f85e04499eb2282733ff7d183e: success

At this point we need to re-create the swift ring:

root@gluster-frontend001:~# gluster-swift-gen-builders a9b091f85e04499eb2282733ff7d183e
Ring files are prepared in /etc/swift. Please restart object store services

Please note that if you want to create a ring serving multiple tenants, you have to specify all of them on the gluster-swift-gen-builders command line, as in the example below.
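
For example, with two tenants the invocation would be something like:

# the second volume/tenant id below is made up for illustration
root@gluster-frontend001:~# gluster-swift-gen-builders a9b091f85e04499eb2282733ff7d183e 0a1b2c3d4e5f67890a1b2c3d4e5f6789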

Now you can run (or restart) the services:

root@gluster-frontend001:~# for service in swift-{object,container,account,proxy}; do start $service; done
swift-object start/running
swift-container start/running
swift-account start/running
swift-proxy start/running

and test it with the swift command-line client:

root@cloud2:~# swift  --os-auth-url http://cloud2.gc3.uzh.ch:5000/v2.0  -U demo:antonio -K antopwd  list
root@cloud2:~# swift  --os-auth-url http://cloud2.gc3.uzh.ch:5000/v2.0  -U demo:antonio -K antopwd  post mycontainer
root@cloud2:~# swift  --os-auth-url http://cloud2.gc3.uzh.ch:5000/v2.0  -U demo:antonio -K antopwd  list
mycontainer
root@cloud2:~# swift  --os-auth-url http://cloud2.gc3.uzh.ch:5000/v2.0  -U demo:antonio -K antopwd  upload mycontainer /etc/fstab
etc/fstab
root@cloud2:~# swift  --os-auth-url http://cloud2.gc3.uzh.ch:5000/v2.0  -U demo:antonio -K antopwd  list mycontainer
etc/fstab

On the swift node you don't have to mount the gluster filesystem, as it is mounted automatically when needed:

root@gluster-frontend001:~# df -h |grep glusterfs
localhost:a9b091f85e04499eb2282733ff7d183e   79G  4.8G   71G   7% /glusterfs/swift/a9b091f85e04499eb2282733ff7d183e

Troubleshooting

If the volume corresponding to the tenant is not present, you will get a rather obscure error. The swift client reports:

root@cloud2:~# swift  --os-auth-url http://cloud2.gc3.uzh.ch:5000/v2.0  -U demo2:antonio -K antopwd  list
Account GET failed: http://130.60.24.55:8080/v1/AUTH_2b25512c3457431bb327472aa1a56618?format=json 503 Internal Server Error  [first 60 chars of response] <html><h1>Service Unavailable</h1><p>The server is currently

while in the server log you will see:

Feb 10 22:39:07 server-b11f5969-8c3c-4b26-8e8d-defb4d272ce9 proxy-server Authenticating user token
Feb 10 22:39:07 server-b11f5969-8c3c-4b26-8e8d-defb4d272ce9 proxy-server Removing headers from request environment: X-Identity-Status,X-Domain-Id,X-Domain-Name,X-Project-Id,X-Project-Name,X-Project-Domain-Id,X-Project-Domain-Name,X-User-Id,X-User-Name,X-User-Domain-Id,X-User-Domain-Name,X-Roles,X-Service-Catalog,X-User,X-Tenant-Id,X-Tenant-Name,X-Tenant,X-Role
Feb 10 22:39:07 server-b11f5969-8c3c-4b26-8e8d-defb4d272ce9 proxy-server Storing f5695ad9b67fbe3b268dfd2555cfa391 token in memcache
Feb 10 22:39:07 server-b11f5969-8c3c-4b26-8e8d-defb4d272ce9 account-server STDOUT: ERROR:root:No export found in ['default', 'a9b091f85e04499eb2282733ff7d183e'] matching drive, volume_not_in_ring (txn: tx0b7c73a009e3447bb5f8a-0052f946fb)
Feb 10 22:39:07 server-b11f5969-8c3c-4b26-8e8d-defb4d272ce9 account-server 127.0.0.1 - - [10/Feb/2014:21:39:07 +0000] "HEAD /volume_not_in_ring/1/AUTH_2b25512c3457431bb327472aa1a56618" 507 - "tx0b7c73a009e3447bb5f8a-0052f946fb" "HEAD http://130.60.24.55:8080/v1/AUTH_2b25512c3457431bb327472aa1a56618" "proxy-server 23129" 0.1375 ""
Feb 10 22:39:07 server-b11f5969-8c3c-4b26-8e8d-defb4d272ce9 proxy-server ERROR Insufficient Storage 127.0.0.1:6012/volume_not_in_ring (txn: tx0b7c73a009e3447bb5f8a-0052f946fb)
Feb 10 22:39:07 server-b11f5969-8c3c-4b26-8e8d-defb4d272ce9 proxy-server Node error limited 127.0.0.1:6012 (volume_not_in_ring) (txn: tx0b7c73a009e3447bb5f8a-0052f946fb)

This is clearly a bug; I hope it will be fixed in the Icehouse release.
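
In the meantime, a quick check when you hit this error is to list the gluster volumes and compare them with the tenant ids known to keystone (keystone tenant-list on the controller): every tenant you want to serve via swift must have a matching volume and must have been passed to gluster-swift-gen-builders.

root@gluster-frontend001:~# gluster volume list
default
a9b091f85e04499eb2282733ff7d183e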
