Show me the code! – By Davanum Srinivas

August 11, 2016

Kubernetes Dev Bootstrap Resources

Filed under: Uncategorized — Davanum Srinivas @ 3:54 pm

Kubernetes development is centered on GitHub workflows, so the links for Issues, Pull Requests, and Commits are as follows:

  • Issues: https://github.com/kubernetes/kubernetes/issues
  • Pull Requests: https://github.com/kubernetes/kubernetes/pulls
  • Commits: https://github.com/kubernetes/kubernetes/commits/master

Starting with 1.4, Kubernetes dev folks are using another repository to track Features: https://github.com/kubernetes/features

Must-read to get the lay of the land:

Kubernetes is organized around Special Interest Groups (SIGs): https://github.com/kubernetes/community

How do I follow my work? https://k8s-gubernator.appspot.com/pr/dims

What’s currently running in the CI systems? http://kubernetes.submit-queue.k8s.io/#/queue

What went wrong in the last 24 hours? Kubernetes 24-Hour Test Report

Where can I find performance numbers, flaky tests on various platforms, etc.?

Where do folks hangout? https://github.com/kubernetes/community/blob/master/README.md#slack-chat

Hope this helps!

April 21, 2016

Quick start Kubernetes in an OpenStack Environment

Filed under: Uncategorized — Davanum Srinivas @ 8:07 am

Here’s a quick way for those who want to try Kubernetes in an existing OpenStack environment:

  • Deploy a VM using an image that has Docker built in. We pick one used by Magnum in the OpenStack CI environment.
  • Use the cloud-init functionality during the Nova boot process to start Kubernetes.

Here’s the script we inject during the Nova boot process:

#!/bin/sh

# Switch off SELinux
setenforce 0

# Set the name server
sudo sh -c 'echo "nameserver 8.8.8.8" > /etc/resolv.conf'

# Get the latest stable version of kubernetes
export K8S_VERSION=$(curl -sS https://storage.googleapis.com/kubernetes-release/release/stable.txt)
echo "K8S_VERSION : ${K8S_VERSION}"

echo "Starting docker service"
sudo systemctl enable docker.service
sudo systemctl start docker.service --ignore-dependencies
echo "Checking docker service"
sudo docker ps

# Run the docker containers for kubernetes
echo "Starting Kubernetes containers"
sudo docker run \
    --volume=/:/rootfs:ro \
    --volume=/sys:/sys:ro \
    --volume=/var/lib/docker/:/var/lib/docker:rw \
    --volume=/var/lib/kubelet/:/var/lib/kubelet:rw \
    --volume=/var/run:/var/run:rw \
    --net=host \
    --pid=host \
    --privileged=true \
    --name=kubelet \
    -d \
    gcr.io/google_containers/hyperkube-amd64:${K8S_VERSION} \
    /hyperkube kubelet \
        --containerized \
        --hostname-override="127.0.0.1" \
        --address="0.0.0.0" \
        --api-servers=http://localhost:8080 \
        --config=/etc/kubernetes/manifests \
        --allow-privileged=true --v=2
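
Once cloud-init has run, a quick sanity check looks something like this (a sketch; the container name and port match the script above, and the API server can take a couple of minutes to come up):

# Confirm the kubelet container stayed up
sudo docker ps --filter name=kubelet

# Tail its logs if something looks off
sudo docker logs --tail 50 kubelet

# The manifests under /etc/kubernetes/manifests bring up the API
# server on localhost:8080; poll until it reports ok
curl -sS http://localhost:8080/healthz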

And here’s how you upload the Magnum image into Glance and then use nova boot to start it up (using the nova and glance Python clients):

#!/bin/sh

export OS_REGION_NAME=RegionOne
export OS_PASSWORD=xyz123
export OS_AUTH_URL=http://172.18.184.20:5000/v2.0
export OS_USERNAME=dsrinivas
export OS_TENANT_NAME=Commons

curl -o fedora-atomic-latest.qcow2 \
    https://fedorapeople.org/groups/magnum/fedora-atomic-latest.qcow2

glance image-create --name "fedora-23-atomic" \
    --disk-format "qcow2" \
    --container-format=bare \
    --file fedora-atomic-latest.qcow2

nova boot \
    --key-name "k8s-keypair" \
    --flavor "m1.medium" \
    --image "fedora-23-atomic" \
    --user-data kube-init.sh \
    --config-drive true \
    "my-k8s"

Resources:

April 8, 2016

New to OpenStack Reviews – Start here!

Filed under: Uncategorized — Davanum Srinivas @ 4:15 pm

Watch this video:

Why should I do a review?

What do I look for when doing a peer review:

Also please take a look at tips here:

Find the Gerrit Dashboard for your project:

Watch this YouTube video from the Austin Summit:

Join #openstack-dev IRC Channel on Freenode:
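
If you want to exercise a change locally while reviewing it, git-review can fetch it into a branch for you (a quick sketch; the change number below is hypothetical):

# Install the Gerrit helper
pip install git-review

# Fetch change 123456 from Gerrit into a local branch
git clone https://git.openstack.org/openstack/nova
cd nova
git review -d 123456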

Happy Reviewing!

February 6, 2015

Deploy a Centos container in Docker for running OpenStack Nova tests

Filed under: Uncategorized — Davanum Srinivas @ 1:26 pm

First, start a CentOS container using the Docker command line and drop into a bash shell:

dims@dims-ubuntu:~$ sudo docker run -i -t centos /bin/bash
[root@218b625b2529 nova]# rpm -q centos-release
centos-release-7-0.1406.el7.centos.2.5.x86_64

Install the EPEL repository

[root@218b625b2529 /]# yum -y install epel-release

Install a few things needed before we can run Nova tests as documented here:

[root@218b625b2529 /]# yum -y install git python-devel openssl-devel python-pip gcc libxslt-devel mysql-devel postgresql-devel libffi-devel libvirt-devel graphviz sqlite-devel

We need tox as well.

[root@218b625b2529 /]# pip install tox

Get the latest nova trunk

[root@218b625b2529 /]# git clone https://git.openstack.org/openstack/nova
[root@218b625b2529 /]# cd nova
[root@218b625b2529 nova]#

Run the tests as usual

[root@218b625b2529 nova]# tox -e py27 nova.tests.unit.test_crypto
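
Since installing all the dependencies takes a while, it may be worth committing the prepared container so you can reuse it later (a sketch; the image name is arbitrary):

dims@dims-ubuntu:~$ sudo docker commit 218b625b2529 centos-nova-tests
dims@dims-ubuntu:~$ sudo docker run -i -t centos-nova-tests /bin/bash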

June 29, 2014

CFv2 Deployment on latest DevStack using MicroBOSH

Filed under: Uncategorized — Davanum Srinivas @ 9:32 pm

As promised, here’s a follow-up to the two previous posts:

Let’s now deploy a full CFv2 instance using MicroBOSH; the instructions are from here:
Install Cloud Foundry on OpenStack

Here’s the edited flavor(s) that I used (screenshot: bosh-cfv2-flavors).

Here’s the micro_bosh.yml for completeness:


name: microbosh-openstack

logging:
  level: DEBUG

network:
  type: dynamic
  vip: 172.24.4.1

resources:
  persistent_disk: 16384
  cloud_properties:
    instance_type: m1.medium

cloud:
  plugin: openstack
  properties:
    openstack:
      auth_url: http://MY_HOST_IP:5000/v2.0
      username: admin
      api_key: passw0rd
      tenant: admin
      default_security_groups: ["ssh", "bosh"]
      default_key_name: microbosh
      private_key: /opt/stack/bosh-workspace/microbosh.pem

apply_spec:
  properties:
    nats:
      ping_interval: 30
      ping_max_outstanding: 30
    director:
      max_threads: 3
    hm:
      resurrector_enabled: true
    ntp:
      - 0.north-america.pool.ntp.org
      - 1.north-america.pool.ntp.org
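For reference, deploying the MicroBOSH from this manifest goes roughly like this with the bosh micro CLI plugin (a sketch; the stemcell filename and directory layout are examples):

cd /opt/stack/bosh-workspace
# Point the plugin at the directory containing micro_bosh.yml
bosh micro deployment microbosh-openstack
# Deploy using an OpenStack KVM stemcell
bosh micro deploy bosh-stemcell-openstack-kvm-ubuntu.tgz
# Then target the new director
bosh target https://<microbosh-ip>:25555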

Here’s the cf-173-openstack.yml originally from @ferdy with minor tweaks:


<%
director_uuid = 'CHANGEME'
static_ip = 'CHANGEME'
root_domain = "#{static_ip}.xip.io"
deployment_name = 'cf'
cf_release = '173'
protocol = 'http'
common_password = 'c1oudc0wc1oudc0w'
%>
name: <%= deployment_name %>
director_uuid: <%= director_uuid %>

releases:
  - name: cf
    version: <%= cf_release %>

compilation:
  workers: 3
  network: default
  reuse_compilation_vms: true
  cloud_properties:
    instance_type: m1.large

update:
  canaries: 0
  canary_watch_time: 30000-600000
  update_watch_time: 30000-600000
  max_in_flight: 32
  serial: false

networks:
  - name: default
    type: dynamic
    cloud_properties:
      security_groups:
        - default
        - bosh
        - cf-private
  - name: external
    type: dynamic
    cloud_properties:
      security_groups:
        - default
        - bosh
        - cf-public
  - name: floating
    type: vip
    cloud_properties: {}

resource_pools:
  - name: common
    network: default
    size: 14
    stemcell:
      name: bosh-openstack-kvm-ubuntu-lucid
      version: latest
    cloud_properties:
      instance_type: m1.small
  - name: large
    network: default
    size: 1
    stemcell:
      name: bosh-openstack-kvm-ubuntu-lucid
      version: latest
    cloud_properties:
      instance_type: m1.medium

jobs:
  - name: nats
    templates:
      - name: nats
      - name: nats_stream_forwarder
    instances: 1
    resource_pool: common
    networks:
      - name: default
        default: [dns, gateway]

  - name: syslog_aggregator
    templates:
      - name: syslog_aggregator
    instances: 1
    resource_pool: common
    persistent_disk: 65536
    networks:
      - name: default
        default: [dns, gateway]

  - name: nfs_server
    templates:
      - name: debian_nfs_server
    instances: 1
    resource_pool: common
    persistent_disk: 65535
    networks:
      - name: default
        default: [dns, gateway]

  - name: postgres
    templates:
      - name: postgres
    instances: 1
    resource_pool: common
    persistent_disk: 65536
    networks:
      - name: default
        default: [dns, gateway]
    properties:
      db: databases

  - name: uaa
    templates:
      - name: uaa
    instances: 1
    resource_pool: common
    networks:
      - name: default
        default: [dns, gateway]

  - name: loggregator
    templates:
      - name: loggregator
    instances: 1
    resource_pool: common
    networks:
      - name: default
        default: [dns, gateway]

  - name: trafficcontroller
    templates:
      - name: loggregator_trafficcontroller
    instances: 1
    resource_pool: common
    networks:
      - name: default
        default: [dns, gateway]

  - name: cloud_controller
    templates:
      - name: cloud_controller_ng
    instances: 1
    resource_pool: common
    networks:
      - name: default
        default: [dns, gateway]

  - name: cloud_controller_worker
    templates:
      - name: cloud_controller_worker
    instances: 1
    resource_pool: common
    networks:
      - name: default
        default: [dns, gateway]

  - name: clock_global
    templates:
      - name: cloud_controller_clock
    instances: 1
    resource_pool: common
    networks:
      - name: default
        default: [dns, gateway]

  - name: etcd
    templates:
      - name: etcd
    instances: 1
    resource_pool: common
    persistent_disk: 10024
    networks:
      - name: default
        default: [dns, gateway]

  - name: health_manager
    templates:
      - name: hm9000
    instances: 1
    resource_pool: common
    networks:
      - name: default
        default: [dns, gateway]

  - name: dea
    templates:
      - name: dea_logging_agent
      - name: dea_next
    instances: 1
    resource_pool: large
    networks:
      - name: default
        default: [dns, gateway]

  - name: router
    templates:
      - name: gorouter
    instances: 1
    resource_pool: common
    networks:
      - name: default
        default: [dns, gateway]

properties:
  domain: <%= root_domain %>
  system_domain: <%= root_domain %>
  system_domain_organization: 'admin'
  app_domains:
    - <%= root_domain %>

  networks:
    apps: default

  nats:
    user: nats
    password: <%= common_password %>
    address: 0.nats.default.<%= deployment_name %>.microbosh
    port: 4222
    machines:
      - 0.nats.default.<%= deployment_name %>.microbosh

  syslog_aggregator:
    address: 0.syslog-aggregator.default.<%= deployment_name %>.microbosh
    port: 54321

  nfs_server:
    address: 0.nfs-server.default.<%= deployment_name %>.microbosh
    network: "*.<%= deployment_name %>.microbosh"
    idmapd_domain: "localdomain"

  debian_nfs_server:
    no_root_squash: true

  loggregator_endpoint:
    shared_secret: <%= common_password %>
    host: 0.trafficcontroller.default.<%= deployment_name %>.microbosh

  loggregator:
    servers:
      zone:
        - 0.loggregator.default.<%= deployment_name %>.microbosh

  traffic_controller:
    zone: 'zone'

  logger_endpoint:
    use_ssl: <%= protocol == 'https' %>
    port: 80

  ssl:
    skip_cert_verify: true

  router:
    prune_stale_droplets_interval: 3000
    droplet_stale_threshold: 1200
    endpoint_timeout: 60
    status:
      port: 8080
      user: gorouter
      password: <%= common_password %>
    servers:
      z1:
        - 0.router.default.<%= deployment_name %>.microbosh
      z2: []

  etcd:
    machines:
      - 0.etcd.default.<%= deployment_name %>.microbosh

  dea: &dea
    disk_mb: 102400
    disk_overcommit_factor: 2
    memory_mb: 15000
    memory_overcommit_factor: 3
    directory_server_protocol: <%= protocol %>
    mtu: 1460
    deny_networks:
      - 169.254.0.0/16 # Google Metadata endpoint

  dea_next: *dea

  disk_quota_enabled: false

  dea_logging_agent:
    status:
      user: admin
      password: <%= common_password %>

  databases: &databases
    db_scheme: postgres
    address: 0.postgres.default.<%= deployment_name %>.microbosh
    port: 5524
    roles:
      - tag: admin
        name: ccadmin
        password: <%= common_password %>
      - tag: admin
        name: uaaadmin
        password: <%= common_password %>
    databases:
      - tag: cc
        name: ccdb
        citext: true
      - tag: uaa
        name: uaadb
        citext: true

  ccdb: &ccdb
    db_scheme: postgres
    address: 0.postgres.default.<%= deployment_name %>.microbosh
    port: 5524
    roles:
      - tag: admin
        name: ccadmin
        password: <%= common_password %>
    databases:
      - tag: cc
        name: ccdb
        citext: true

  ccdb_ng: *ccdb

  uaadb:
    db_scheme: postgresql
    address: 0.postgres.default.<%= deployment_name %>.microbosh
    port: 5524
    roles:
      - tag: admin
        name: uaaadmin
        password: <%= common_password %>
    databases:
      - tag: uaa
        name: uaadb
        citext: true

  cc: &cc
    srv_api_uri: <%= protocol %>://api.<%= root_domain %>
    jobs:
      local:
        number_of_workers: 2
      generic:
        number_of_workers: 2
      global:
        timeout_in_seconds: 14400
      app_bits_packer:
        timeout_in_seconds: null
      app_events_cleanup:
        timeout_in_seconds: null
      app_usage_events_cleanup:
        timeout_in_seconds: null
      blobstore_delete:
        timeout_in_seconds: null
      blobstore_upload:
        timeout_in_seconds: null
      droplet_deletion:
        timeout_in_seconds: null
      droplet_upload:
        timeout_in_seconds: null
      model_deletion:
        timeout_in_seconds: null
    bulk_api_password: <%= common_password %>
    staging_upload_user: upload
    staging_upload_password: <%= common_password %>
    quota_definitions:
      default:
        memory_limit: 10240
        total_services: 100
        non_basic_services_allowed: true
        total_routes: 1000
        trial_db_allowed: true
    resource_pool:
      resource_directory_key: cloudfoundry-resources
      fog_connection:
        provider: Local
        local_root: /var/vcap/nfs/shared
    packages:
      app_package_directory_key: cloudfoundry-packages
      fog_connection:
        provider: Local
        local_root: /var/vcap/nfs/shared
    droplets:
      droplet_directory_key: cloudfoundry-droplets
      fog_connection:
        provider: Local
        local_root: /var/vcap/nfs/shared
    buildpacks:
      buildpack_directory_key: cloudfoundry-buildpacks
      fog_connection:
        provider: Local
        local_root: /var/vcap/nfs/shared
    install_buildpacks:
      - name: java_buildpack
        package: buildpack_java
      - name: ruby_buildpack
        package: buildpack_ruby
      - name: nodejs_buildpack
        package: buildpack_nodejs
      - name: go_buildpack
        package: buildpack_go
    db_encryption_key: <%= common_password %>
    hm9000_noop: false
    diego: false
    newrelic:
      license_key: null
      environment_name: <%= deployment_name %>

  ccng: *cc

  login:
    enabled: false

  uaa:
    url: <%= protocol %>://uaa.<%= root_domain %>
    no_ssl: <%= protocol == 'http' %>
    cc:
      client_secret: <%= common_password %>
    admin:
      client_secret: <%= common_password %>
    batch:
      username: batch
      password: <%= common_password %>
    clients:
      cf:
        override: true
        authorized-grant-types: password,implicit,refresh_token
        authorities: uaa.none
        scope: cloud_controller.read,cloud_controller.write,openid,password.write,cloud_controller.admin,scim.read,scim.write
        access-token-validity: 7200
        refresh-token-validity: 1209600
      admin:
        secret: <%= common_password %>
        authorized-grant-types: client_credentials
        authorities: clients.read,clients.write,clients.secret,password.write,scim.read,uaa.admin
    scim:
      users:
        - admin|<%= common_password %>|scim.write,scim.read,openid,cloud_controller.admin,uaa.admin,password.write
        - services|<%= common_password %>|scim.write,scim.read,openid,cloud_controller.admin
    jwt:
      signing_key: |
        -----BEGIN RSA PRIVATE KEY-----
        MIICXAIBAAKBgQDHFr+KICms+tuT1OXJwhCUmR2dKVy7psa8xzElSyzqx7oJyfJ1
        JZyOzToj9T5SfTIq396agbHJWVfYphNahvZ/7uMXqHxf+ZH9BL1gk9Y6kCnbM5R6
        0gfwjyW1/dQPjOzn9N394zd2FJoFHwdq9Qs0wBugspULZVNRxq7veq/fzwIDAQAB
        AoGBAJ8dRTQFhIllbHx4GLbpTQsWXJ6w4hZvskJKCLM/o8R4n+0W45pQ1xEiYKdA
        Z/DRcnjltylRImBD8XuLL8iYOQSZXNMb1h3g5/UGbUXLmCgQLOUUlnYt34QOQm+0
        KvUqfMSFBbKMsYBAoQmNdTHBaz3dZa8ON9hh/f5TT8u0OWNRAkEA5opzsIXv+52J
        duc1VGyX3SwlxiE2dStW8wZqGiuLH142n6MKnkLU4ctNLiclw6BZePXFZYIK+AkE
        xQ+k16je5QJBAN0TIKMPWIbbHVr5rkdUqOyezlFFWYOwnMmw/BKa1d3zp54VP/P8
        +5aQ2d4sMoKEOfdWH7UqMe3FszfYFvSu5KMCQFMYeFaaEEP7Jn8rGzfQ5HQd44ek
        lQJqmq6CE2BXbY/i34FuvPcKU70HEEygY6Y9d8J3o6zQ0K9SYNu+pcXt4lkCQA3h
        jJQQe5uEGJTExqed7jllQ0khFJzLMx0K6tj0NeeIzAaGCQz13oo2sCdeGRHO4aDh
        HH6Qlq/6UOV5wP8+GAcCQFgRCcB+hrje8hfEEefHcFpyKH+5g1Eu1k0mLrxK2zd+
        4SlotYRHgPCEubokb2S1zfZDWIXW3HmggnGgM949TlY=
        -----END RSA PRIVATE KEY-----
      verification_key: |
        -----BEGIN PUBLIC KEY-----
        MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQDHFr+KICms+tuT1OXJwhCUmR2d
        KVy7psa8xzElSyzqx7oJyfJ1JZyOzToj9T5SfTIq396agbHJWVfYphNahvZ/7uMX
        qHxf+ZH9BL1gk9Y6kCnbM5R60gfwjyW1/dQPjOzn9N394zd2FJoFHwdq9Qs0wBug
        spULZVNRxq7veq/fzwIDAQAB
        -----END PUBLIC KEY-----

Things to tweak:
In my cf-173-openstack.yml, I had set static_ip to 172.24.4.10. So once MicroBOSH finished deploying CFv2, I had to find the VM running the router and set its floating IP to 172.24.4.10. You can do this by running “bosh vms”, looking up the IP address for “router/0”, finding the corresponding VM in Horizon (or via nova list), and then setting its floating IP, as sketched below. You need to do this before you try to use the cf command-line client. Be sure to download the latest and greatest CLI from https://github.com/cloudfoundry/cli#downloads.
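
In CLI terms, the fix-up looks something like this (a sketch; the server name is whatever “bosh vms” maps router/0 to):

# Find the VM backing router/0
bosh vms
# Locate the matching instance in Nova
nova list
# Attach the floating IP the manifest expects
nova add-floating-ip <router-vm-name> 172.24.4.10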

Flaky Stuff:
The postgres VM ran into trouble multiple times. I figured out how to stop/start its control script by hand, but since other VMs like cloud_controller, clock_global, and cloud_controller_worker had issues too, it was easier to whack it with a big hammer: run “bosh delete deployment cf” and re-instantiate the VMs. (Yes, I tried combinations of the bosh start/recreate/restart commands as well.)

Hints:
bosh-lite’s README.md is very helpful for building the cf release. Andy’s blog post helped quite a bit to peel the onion while debugging, as did Dr Nic’s posts. Thanks, folks!

June 24, 2014

Deploying BOSH with Micro BOSH on latest DevStack

Filed under: Uncategorized — Davanum Srinivas @ 1:33 pm

As a follow-up to Running Cloud Foundry’s Micro BOSH On Latest DevStack: I had to bump VOLUME_BACKING_FILE_SIZE to 200GB in DevStack, as 100GB was not enough. Instructions from http://docs.cloudfoundry.org/deploying/openstack/deploying_bosh.html were handy as usual. Here’s my bosh-openstack.yml:

---
name: bosh-openstack
director_uuid: 80a0b9cc-a7e4-4134-81ee-a186e8bebff8

release:
  name: bosh
  version: latest

compilation:
  workers: 3
  network: default
  reuse_compilation_vms: true
  cloud_properties:
    instance_type: m1.small

update:
  canaries: 1
  canary_watch_time: 3000-120000
  update_watch_time: 3000-120000
  max_in_flight: 4

networks:
  - name: floating
    type: vip
    cloud_properties: {}
  - name: default
    type: dynamic
    cloud_properties: {}

resource_pools:
  - name: common
    network: default
    size: 8
    stemcell:
      name: bosh-openstack-kvm-ubuntu
      version: latest
    cloud_properties:
      instance_type: m1.small

jobs:
  - name: nats
    template: nats
    instances: 1
    resource_pool: common
    networks:
      - name: default
        default: [dns, gateway]

  - name: redis
    template: redis
    instances: 1
    resource_pool: common
    networks:
      - name: default
        default: [dns, gateway]

  - name: postgres
    template: postgres
    instances: 1
    resource_pool: common
    persistent_disk: 16384
    networks:
      - name: default
        default: [dns, gateway]

  - name: powerdns
    template: powerdns
    instances: 1
    resource_pool: common
    networks:
      - name: default
        default: [dns, gateway]
      - name: floating
        static_ips:
          - 172.24.4.2

  - name: blobstore
    template: blobstore
    instances: 1
    resource_pool: common
    persistent_disk: 51200
    networks:
      - name: default
        default: [dns, gateway]

  - name: director
    template: director
    instances: 1
    resource_pool: common
    persistent_disk: 16384
    networks:
      - name: default
        default: [dns, gateway]
      - name: floating
        static_ips:
          - 172.24.4.3

  - name: registry
    template: registry
    instances: 1
    resource_pool: common
    networks:
      - name: default
        default: [dns, gateway]

  - name: health_monitor
    template: health_monitor
    instances: 1
    resource_pool: common
    networks:
      - name: default
        default: [dns, gateway]

properties:
  nats:
    address: 0.nats.default.bosh-openstack.microbosh
    user: nats
    password: nats

  redis:
    address: 0.redis.default.bosh-openstack.microbosh
    password: redis

  postgres: &bosh_db
    host: 0.postgres.default.bosh-openstack.microbosh
    user: postgres
    password: postgres
    database: bosh

  dns:
    address: 172.24.4.2
    db: *bosh_db
    recursor: 172.24.4.1

  blobstore:
    address: 0.blobstore.default.bosh-openstack.microbosh
    agent:
      user: agent
      password: agent
    director:
      user: director
      password: director

  director:
    name: bosh
    address: 0.director.default.bosh-openstack.microbosh
    db: *bosh_db

  registry:
    address: 0.registry.default.bosh-openstack.microbosh
    db: *bosh_db
    http:
      user: registry
      password: registry

  hm:
    http:
      user: hm
      password: hm
    director_account:
      user: admin
      password: admin
    resurrector_enabled: true

  ntp:
    - 0.north-america.pool.ntp.org
    - 1.north-america.pool.ntp.org

  openstack:
    auth_url: http://173.193.231.50:5000/v2.0
    username: admin
    api_key: passw0rd
    tenant: admin
    region:
    default_security_groups: ["default", "ssh", "bosh"]
    default_key_name: microbosh
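
With the MicroBOSH targeted, the deploy itself follows the usual BOSH flow (a sketch; the stemcell and release filenames are examples):

bosh upload stemcell bosh-stemcell-openstack-kvm-ubuntu.tgz
bosh upload release bosh-latest.tgz
bosh deployment bosh-openstack.yml
bosh deploy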

October 17, 2012

Scripts to start/stop OpenStack environment built using DevStack

Filed under: Uncategorized — Davanum Srinivas @ 4:14 pm

Work in progress… Once I bootstrapped an OpenStack install using DevStack, I wanted to keep using the environment that was just built. I could not find scripts to start/stop all the services, so here’s my effort. If someone has a better way, please let me know!

#!/bin/bash

rm -rf /var/log/nova/*.log

service mysql start
service rabbitmq-server start

cd /opt/stack/glance/bin
/opt/stack/glance/bin/glance-registry --config-file=/etc/glance/glance-registry.conf > /var/log/nova/glance-registry.log 2>&1 &

cd /opt/stack/glance/bin
/opt/stack/glance/bin/glance-api --config-file=/etc/glance/glance-api.conf > /var/log/nova/glance-api.log 2>&1 &
echo "Waiting for g-api to start..."
if ! timeout 60 sh -c "while ! wget --no-proxy -q -O- http://127.0.0.1:9292;
do sleep 1; done"; then
        echo "g-api did not start"
        exit 1
fi
echo "Done."

cd /opt/stack/keystone/bin
/opt/stack/keystone/bin/keystone-all --config-file /etc/keystone/keystone.conf --log-config /etc/keystone/logging.conf -d --debug > /var/log/nova/keystone-all.log 2>&1 &
echo "Waiting for keystone to start..."
if ! timeout 60 sh -c "while ! wget --no-proxy -q -O- http://127.0.0.1:5000;
do sleep 1; done"; then
        echo "keystone did not start"
        exit 1
fi
echo "Done."

cd /opt/stack/cinder/bin/
/opt/stack/cinder/bin/cinder-api --config-file /etc/cinder/cinder.conf > /var/log/nova/cinder-api.log 2>&1 &

cd /opt/stack/cinder/bin/
/opt/stack/cinder/bin/cinder-volume --config-file /etc/cinder/cinder.conf > /var/log/nova/cinder-volume.log 2>&1 &

cd /opt/stack/cinder/bin/
/opt/stack/cinder/bin/cinder-scheduler --config-file /etc/cinder/cinder.conf > /var/log/nova/cinder-scheduler.log 2>&1 &

cd /opt/stack/nova/bin 
/opt/stack/nova/bin/nova-api > /var/log/nova/nova-api.log 2>&1 &
echo "Waiting for nova-api to start..."
if ! timeout 60 sh -c "while ! wget --no-proxy -q -O- http://127.0.0.1:8774;
do sleep 1; done"; then
        echo "nova-api did not start"
        exit 1
fi
echo "Done."

cd /opt/stack/nova/bin 
/opt/stack/nova/bin/nova-scheduler > /var/log/nova/nova-scheduler.log 2>&1 &

cd /opt/stack/nova/bin 
/opt/stack/nova/bin/nova-cert > /var/log/nova/nova-cert.log 2>&1 &

cd /opt/stack/nova/bin 
/opt/stack/nova/bin/nova-objectstore > /var/log/nova/nova-objectstore.log 2>&1 &

cd /opt/stack/nova/bin 
/opt/stack/nova/bin/nova-network > /var/log/nova/nova-network.log 2>&1 &

cd /opt/stack/nova/bin 
/opt/stack/nova/bin/nova-compute > /var/log/nova/nova-compute.log 2>&1 &

cd /opt/stack/noVNC
/opt/stack/noVNC/utils/nova-novncproxy --config-file /etc/nova/nova.conf  --web . > /var/log/nova/nova-novncproxy.log 2>&1 &

cd /opt/stack/nova/bin/
/opt/stack/nova/bin/nova-xvpvncproxy --config-file /etc/nova/nova.conf > /var/log/nova/nova-xvpvncproxy.log 2>&1 &

cd /opt/stack/nova/bin/
/opt/stack/nova/bin/nova-consoleauth > /var/log/nova/nova-consoleauth.log 2>&1 &

service apache2 start

And here’s the matching stop script:

#!/bin/bash

kill -9 `ps aux | grep -v grep | grep /opt/stack | awk '{print $2}'`

service apache2 stop
service rabbitmq-server stop
service mysql stop
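
To use these, save them as two separate files and run them as root (the filenames here are my choice):

chmod +x start-stack.sh stop-stack.sh
sudo ./start-stack.sh   # brings up mysql, rabbit, glance, keystone, cinder and nova
sudo ./stop-stack.sh    # kills everything under /opt/stack and stops the services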

May 3, 2012

Developers Kit for Pure Application System

Filed under: Uncategorized — Davanum Srinivas @ 11:54 am

The deployment engine / kit for Pure Application System is available to try as a VMDK: http://ibm.co/puredevkit. There is also a five-part developerWorks guide series that dives deep. See http://ibm.co/puredevlinks to get started.

January 4, 2012

Follow a user in Lotus Connections 3.0

Filed under: Uncategorized — Tags: , — Davanum Srinivas @ 3:35 pm

One main feature in Lotus Connections 3.0 is the asymmetric follow of someone (like Twitter). The API documentation is here. Since sample code is always better, here’s a Python snippet that does an HTTP GET to look up the userid given an email, then a quick HTTP POST to follow that user.

#!/usr/bin/python
import sys,urllib,urllib2,traceback,base64
from xml.dom import minidom

xml_data_header = """
<?xml version="1.0" encoding="UTF-8"?>
<entry xmlns="http://www.w3.org/2005/Atom">
   <category term="resource-follow" scheme="http://www.ibm.com/xmlns/prod/sn/type"></category>
   <category term="profiles" scheme="http://www.ibm.com/xmlns/prod/sn/source"></category>
   <category term="profile" scheme="http://www.ibm.com/xmlns/prod/sn/resource-type"></category>
   <category term="
""".strip()
xml_data_footer = """
" scheme="http://www.ibm.com/xmlns/prod/sn/resource-id"></category>
</entry>
""".strip()

if len(sys.argv) != 4: 
        print 'Usage: follow <userid> <password> <email-of-user-to-follow>' 
        sys.exit(1)

base64string = base64.encodestring('%s:%s' % (sys.argv[1], sys.argv[2]))[:-1]

def getUuidForUser(email):
	uri = "https://w3-connections.ibm.com/profiles/atom/profile.do?format=lite&email=%s" % email
	req = urllib2.Request(uri)
	req.add_header('Content-type','application/atom+xml')
	req.add_header('Authorization', "Basic %s" % base64string)
	dom = minidom.parse(urllib2.urlopen(req))
	element = dom.getElementsByTagNameNS('http://www.ibm.com/xmlns/prod/sn', 'userid')[0]
	return element.firstChild.data

def followUser(uuid):
	uri = 'https://w3-connections.ibm.com/profiles/follow/atom/resources'
	query_string_values = {'source': 'profiles', 'type'  : 'profile'}
	payload = '%s%s%s' % (xml_data_header, uuid, xml_data_footer)

	try :
		if query_string_values:
		    uri = ''.join([uri, '?', urllib.urlencode(query_string_values)])
		req = urllib2.Request(uri, data=payload)
		req.add_header('Content-type','application/atom+xml')
		req.add_header('Authorization', "Basic %s" % base64string)
		response = urllib2.urlopen(req)
		return response.read()
	except urllib2.HTTPError, error:
		return error.read()

print followUser(getUuidForUser(sys.argv[3]))
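
Usage looks like this (a hypothetical invocation; save the snippet above as follow.py):

chmod +x follow.py
./follow.py jdoe passw0rd colleague@example.com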

October 8, 2011

Quick Start for WebSphere Liberty Profile – Deploying a WAR

Filed under: Uncategorized — Davanum Srinivas @ 12:58 pm

Download the WebSphere Application Server V8.5 Alpha Liberty Profile from:
https://www.ibm.com/developerworks/mydeveloperworks/blogs/wasdev/entry/download?lang=en

Create a new web app using maven:
http://maven.apache.org/guides/mini/guide-webapp.html
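
From that guide, creating and packaging the webapp boils down to something like this (the groupId and artifactId are placeholders):

mvn archetype:generate -DgroupId=com.example \
    -DartifactId=my-webapp \
    -DarchetypeArtifactId=maven-archetype-webapp \
    -DinteractiveMode=false
cd my-webapp
mvn package   # produces target/my-webapp.war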

Unzip the server:

dims@dims-desktop:~$ unzip -q Downloads/was4d-20110927-1211.zip

Let’s use the default server for deploying our WAR

dims@dims-desktop:~$ cd was4d/usr/servers/defaultServer/

See which features are enabled

dims@dims-desktop:~/was4d/usr/servers/defaultServer$ cat server.xml
<server description="new server">

<!-- Enable features -->
<!--
<featureManager>
<feature>servlet-3.0</feature>
</featureManager>
-->

</server>

Add support for servlets and JSPs

dims@dims-desktop:~/was4d/usr/servers/defaultServer$ vi server.xml
dims@dims-desktop:~/was4d/usr/servers/defaultServer$ cat server.xml
<server description="new server">

<!-- Enable features -->
<featureManager>
<feature>servlet-3.0</feature>
<feature>jsp-2.2</feature>
</featureManager>

</server>

Create a directory for dropping in our WAR

dims@dims-desktop:~/was4d/usr/servers/defaultServer$ mkdir dropins
dims@dims-desktop:~/was4d/usr/servers/defaultServer$ cd dropins/
dims@dims-desktop:~/was4d/usr/servers/defaultServer/dropins$ cp ~/my-webapp.war .
dims@dims-desktop:~/was4d/usr/servers/defaultServer/dropins$ cd ../../../..

Start the server

dims@dims-desktop:~/was4d$ date
Sat Oct 8 13:50:50 EDT 2011
dims@dims-desktop:~/was4d$ bin/was4d start
Starting defaultServer ...
OK.

Let’s see how long it took…just a few seconds.

dims@dims-desktop:~/was4d$ date
Sat Oct 8 13:50:56 EDT 2011

That’s it. Point your browser to http://localhost:9080/my-webapp/

More information can be found at:
https://www.ibm.com/developerworks/mydeveloperworks/blogs/wasdev/entry/announcing_the_was_v8_5_alpha1?lang=en
