rdo-test-day-5feb2014
LOGS
07:57:42 <kashyap> #startmeeting
07:57:42 <zodbot> Meeting started Tue Feb  4 07:57:42 2014 UTC.  The chair is kashyap. Information about MeetBot at http://wiki.debian.org/MeetBot.
07:57:42 <zodbot> Useful Commands: #action #agreed #halp #info #idea #link #topic.
07:58:11 <kashyap> #topic RDO-test-day-5FEB2014
07:58:31 <yfried> #help
07:58:41 <kashyap> #meetingtopic RDO-test-day-FEB2014
07:58:50 <kashyap> Ugh, wrong command
07:59:06 <kashyap> #meetingname RDO-test-day-5FEB2014
07:59:06 <zodbot> The meeting name has been set to 'rdo-test-day-5feb2014'
07:59:19 <kashyap> yfried, You get help if you send a private query to zodbot
07:59:42 <kashyap> yfried, Can you try what I suggested on rdo-list?
07:59:55 <yfried> kashyap: didn't understand
08:00:31 <kashyap> yfried, For your "No more mirrors" -- can you try explicitly adding the mirrors.fedoraproject.org entry in your /etc/hosts file?
08:01:14 <kashyap> nlevinki, Just a check:
08:01:26 <kashyap> Is qpid running on port 5672? -- $ netstat -lnptu | grep qpid
08:02:02 <yfried> kashyap: re-running packstack, like anand said, seems to work
08:02:25 <nlevinki> tcp6       0      0 :::5672                 :::*                    LISTEN      589/qpidd
08:02:44 <kashyap> nlevinki, You should also see an entry for both tcp & tcp6
08:02:52 <kashyap> Can you restart: $ systemctl restart qpidd
08:03:23 <kashyap> That's what you should see:
08:03:24 <kashyap> $ netstat -lnptu | grep qpid
08:03:25 <kashyap> tcp        0      0 0.0.0.0:5672            0.0.0.0:*               LISTEN      694/qpidd
08:03:25 <kashyap> tcp6       0      0 :::5672                 :::*                    LISTEN      694/qpidd
08:03:47 <nlevinki> after restarting the service I got both tcp entries
08:04:57 <nlevinki> thanks, working now. now the question is how to debug why the ipv4 tcp listener didn't start
08:21:09 <kashyap> You can check in Nova logs. Or you could use qpidd's logging abilities; from its man-page,  "'--log-enable warning+' logs all warning, error and critical messages."
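For reference, that man-page flag can be passed straight on the qpidd command line; a minimal sketch, assuming the log-file path is illustrative and the config-file location varies by version:

    # Log all warning, error and critical messages (per the man page quote above)
    qpidd --log-enable warning+ --log-to-file /var/log/qpidd.log
    # Or persistently, in the broker's config file:
    #   log-enable=warning+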
08:41:47 <nlevinki> trying to configure cinder as glance default store but I get this error message, any idea? "Stderr: '/bin/sh: collie: command not found\n' Disabling add method.
08:41:47 <nlevinki> 2014-02-04 09:23:44.803 617 WARNING glance.store.base [-] Failed to configure store correctly: Store cinder could not be configured correctly. Reason: Cinder storage requires a context. Disabling add method.
08:48:22 <kashyap> nlevinki, This doesn't seem to have full resolution, but might give you clues -- https://ask.openstack.org/en/question/7322/how-to-use-cinder-as-glance-default_store/
08:49:55 <nlevinki> thanks, i read it before; it doesn't work on their page either :-)
08:52:00 <kashyap> #link https://etherpad.openstack.org/p/rdo_test_day_feb_2014
08:52:58 <kashyap> #info The above URL can be used to post/track any notes
08:53:39 <ohochman> beagles: ping
08:56:22 <psedlak> anyone seen f20 packstack failing on mysql.pp Execution of '/sbin/service mariadb start' returned 1
08:57:10 <psedlak> mysqld_safe fails to access the /var/log/mariadb/mariadb.log file (Permission denied) (selinux is permissive)
08:58:07 <kashyap> ohochman, He's in Canada, must be asleep. Best to drop a message so he can pick it up async
08:59:18 <ohochman> kashyap: 10x, do we have anyone from packaging that's awake?
09:00:01 <kashyap> psedlak, I don't use packstack, but I've heard this thing fly-by on a bugzilla, if you have a fast internet, you might want to search
09:00:25 <kashyap> psedlak, A slightly related bz - https://bugzilla.redhat.com/show_bug.cgi?id=1034790
09:00:34 <kashyap> ohochman, Not that I know of.
09:00:34 <ohochman> HELP **foreman-server installation on rhel6.5 **  there's a dependency issue which I'm trying to workaround :  http://pastebin.test.redhat.com/189337
09:01:58 <kashyap> ohochman, You've posted an internal pastebin URL, which folks on this channel may not be able to see, can you please use http://paste.openstack.org/
09:03:00 <ohochman> kashyap: sure, you're right.  http://pastebin.com/K1jQriz2
09:03:18 <nlevinki> configured a new tenant but when I run the command "keystone tenant-list"  I get this message "Expecting an auth URL via either --os-auth-url or env[OS_AUTH_URL]" why ?
09:04:33 <kashyap> ohochman, Do you have EPEL repo configured? You need 'rubygem-rest-client' package
09:04:48 <psedlak> afazekas: mariadb-server.x86_64 1:5.5.34-2.fc20 was ok, mariadb-server.x86_64 1:5.5.34-3.fc20 is in the faulty run ...
09:04:51 <kashyap> Does it show up when you do: $ yum search rubygem-rest-client ?
09:05:17 <psedlak> afazekas: see 1:5.5.34-2.fc20 vs 1:5.5.34-3.fc20 ... the -2 => -3
09:07:28 <yfried> anyone seen issues with neutron-dhcp-agent?
09:07:45 <kashyap> ohochman, Or you can manually workaround (and are willing to debug gem dependency issues) by doing '$ gem install rest-client'
09:08:20 <afazekas> yfried: yes
09:08:32 <kashyap> psedlak, That's the working mariadb-5.5.34-2.fc20.x86_64 I have on my F20 set-up  (mariadb-5.5.34-1 was also buggy: https://bugzilla.redhat.com/show_bug.cgi?id=1034790)
09:08:58 <kashyap> yfried, You have to be more specific. On F20 setup, neutron-dhcp-agent works just fine
09:09:27 <yfried> afazekas: on rhel65 I can't start it
09:10:04 <kashyap> nlevinki, Did you source your keystone credentials?
09:10:58 <kashyap> If you've changed from UUID to PKI tokens, this might be handy:
09:11:04 <kashyap> #link  http://adam.younglogic.com/2013/07/troubleshooting-pki-middleware/
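For context, nlevinki's "Expecting an auth URL" error usually means the OS_* environment variables aren't set. A minimal sketch of sourcing the packstack-generated credentials file; the file name and values below are the usual defaults, shown as assumptions:

    # Source the admin credentials written by packstack, then retry the command
    source ~/keystonerc_admin
    keystone tenant-list
    # keystonerc_admin roughly amounts to:
    #   export OS_USERNAME=admin
    #   export OS_PASSWORD=<admin password>
    #   export OS_TENANT_NAME=admin
    #   export OS_AUTH_URL=http://<controller>:5000/v2.0/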
09:12:17 <yfried> afazekas: kashyap: rhel65 - neutron-dhcp-agent dead. can't start.
09:12:33 <yfried> detected unhandled Python exception in '/usr/bin/neutron-dhcp-agent'
09:13:37 <ohochman> kashyap: yes, I have epel.repo
09:13:57 <kashyap> yfried, Please investigate from Neutron logs to see what's going on
09:14:21 <kashyap> ohochman, See my other suggestion (try directly via 'gem' to isolate the issue)
09:14:35 <ohochman> kashyap: ok
09:14:38 * ohochman trying
09:15:51 <ohochman> kashyap: works after removing augeas :)
09:16:02 <ohochman> and fixing the epel.repo
09:16:17 <kashyap> yfried, To debug more, see the "Collecting Neutron debug info section there -- https://etherpad.openstack.org/p/rdo_test_day_feb_2014
09:16:43 <kashyap> #info To debug Neutron issues -- refer lines 7 to 13 here: https://etherpad.openstack.org/p/rdo_test_day_feb_2014
09:25:36 <psedlak> afazekas: btw this is the reason i guess https://bugzilla.redhat.com/show_bug.cgi?id=1043501
09:26:16 <psedlak> afazekas: http://pkgs.fedoraproject.org/cgit/mariadb.git/commit/?h=f20&id=7e95015252688d603e77fef488f3cca47f67623f
09:28:13 <tshefi> Glance/Gluster backend: "Reason: Unable to create datadir: /mnt/gluster/images/ Disabling add method" in api.log; fixed by setting UID/GID 161 permissions on the Gluster volume
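The UID/GID 161 tshefi mentions is the glance user/group on RDO installs; a hedged sketch of setting the ownership via gluster's volume options (VOLNAME is a placeholder):

    # Make new files on the volume owned by the glance user (uid/gid 161)
    gluster volume set VOLNAME storage.owner-uid 161
    gluster volume set VOLNAME storage.owner-gid 161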
09:28:54 <afazekas> psedlak: after installation : ls -l /var/log/mariadb/mariadb.log
09:28:54 <afazekas> -rw-r--r--. 1 root root 0 Feb  4 09:28 /var/log/mariadb/mariadb.log
09:29:39 <psedlak> afazekas: rpm -qf /var/log/mariadb/mariadb.log: mariadb-server-5.5.34-3.fc20.x86_64 ??
09:29:40 <kashyap> afazekas, That's what I see:
09:29:41 <kashyap> $ ls -l /var/log/mariadb/mariadb.log
09:29:41 <kashyap> -rw-r-----. 1 mysql mysql 11370 Jan  2 14:33 /var/log/mariadb/mariadb.log
09:29:51 <kashyap> With: $ rpm -q mariadb
09:29:52 <kashyap> mariadb-5.5.34-2.fc20.x86_64
09:30:15 <psedlak> kashyap: yeah, on my laptop, on a mariadb updated from -2 to -3 it is the same (owned by mysql:mysql)
09:30:24 <afazekas> So looks like the package installs the log file with root owner
09:30:25 <psedlak> kashyap: but that root:root happens on a more-or-less fresh vm
09:30:50 <psedlak> kashyap: today the -3 containing http://pkgs.fedoraproject.org/cgit/mariadb.git/commit/?h=f20&id=7e95015252688d603e77fef488f3cca47f67623f fix was pushed to stable
09:30:56 <kashyap> It _could_ be a packaging bug.
09:31:07 <kashyap> Ugh, /me clicks
09:31:12 <afazekas> kashyap: it is
09:31:39 <kashyap> Please note it here, under == Known Issues ==  section: https://etherpad.openstack.org/p/rdo_test_day_feb_2014
09:33:10 <anand> /join #RDO-test-day-FEB2014
09:33:28 <kashyap> There's no such IRC channel
09:33:39 <anand> then how to join there?
09:33:47 <tshefi> yrabl, sudo yum install http://rdo.fedorapeople.org/openstack-icehouse/rdo-release-icehouse.rpm
09:35:00 <kashyap> anand, I said, there's *no* such channel. All test-day discussions happen here, as you can see!
09:35:48 <anand> allright :)
09:37:11 <ohochman> some new problem with openstack-foreman-installer  ** 'yum install openstack-foreman-installer' fails on :  "no more mirrors"  --> http://pastebin.com/APbucQL4
09:38:01 <kashyap> ohochman, Can you try adding this entry
09:38:02 <kashyap> 66.135.62.201 mirrors.fedoraproject.org
09:38:12 <kashyap> to your /etc/hosts and see if that alleviates it?
09:38:15 <ohochman> kashyap: to foreman.repo  ?
09:38:24 <yfried> ajeain: what host are you working on?
09:38:30 <kashyap> ohochman, No, to /etc/hosts
09:38:34 <ohochman> ok
09:39:24 * ohochman trying   66.135.62.201 mirrors.fedoraproject.org
09:45:30 <ohochman> kashyap_bbiab: it fails with  66.135.62.201 mirrors.fedoraproject.org  :(
09:46:14 * ohochman trying on machines located in the U.S.
09:47:22 <yfried> kashyap_bbiab: afazekas: FAILED VERSION REQUIREMENT FOR DNSMASQ
09:48:07 <yfried> dnsmasq-2.48 not enough
09:48:09 <afazekas> yfried: a long time ago that was only a warning message
09:48:29 <yfried> afazekas, well, now it's an actual blocker
09:48:40 <yfried> try /usr/bin/python /usr/bin/neutron-dhcp-agent --log-file /var/log/neutron/dhcp-agent.log --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/dhcp_agent.ini
09:48:45 <yfried> and see for yourself
09:58:02 <pixelb> yfried, There is an avoidance patch for that but the indentation looks mangled. I'll see if I can respin quickly
09:58:23 <yfried> pixelb: yeah, just saw this now
09:59:47 <yfried> pixelb: I think I damaged the file on my setup. where can I see the actual file online?
10:00:09 <psedlak> kashyap_bbiab: i've added link to etherpad on the workaround page
10:01:54 <pixelb> yfried, This looks wrong to me: https://github.com/redhat-openstack/quantum/commit/d46ad1dc2096781c0db650a5a647f21b0c2f6316
10:02:18 <yfried> pixelb: yep - that's the issue
10:02:25 <pixelb> OK fixing...
10:02:38 <yfried> pixelb: how long?
10:02:42 <pixelb> 10 mins
10:03:11 <yfried> pixelb: ok. I'll re-provision my setup meanwhile
10:04:08 <kashyap> psedlak, Thanks, formatted a little bit for readability.
10:04:15 <psedlak> kashyap: :)
10:06:56 <ohochman> Error in foreman-server.sh --> Error: /File[/var/lib/puppet/lib/puppet/type/neutron_metadata_agent_config.rb    -->    http://pastebin.com/rp2pYL2z
10:13:04 <kashyap> Your issue looks similar to this -- https://groups.google.com/forum/#!topic/puppet-users/Q7Jry3JAc4U
10:13:35 <kashyap> They suggest a config parameter:  configtimeout = 600  (default is 60 or 120 seconds). No clue if this fixes it for your or not
10:17:26 <yfried> pixelb: created the bug - https://bugzilla.redhat.com/show_bug.cgi?id=1061055
10:17:38 <afazekas> psedlak: keystone is ill on el6: http://www.fpaste.org/74211/15090381/
10:17:49 <yfried> afazekas, kashyap ^
10:20:12 <kashyap> yfried, Thanks; for further bugs, please note them under the "Bugs" section here -- https://etherpad.openstack.org/p/rdo_test_day_feb_2014. I added the above bug
10:25:03 <pixelb> yfried, Fix for that now public
10:27:58 <apevec> what's the wiki for testday today? /topic has old one
10:28:26 <apevec> ehh, no ops :(
10:28:29 <kashyap> apevec, http://openstack.redhat.com/RDO_test_day_Icehouse_milestone_2
10:28:53 <kashyap> apevec, Yeah, I wondered about it; rbowen or someone should have ops
10:29:11 <apevec> btw, next time I propose we skip the testday at m2 - there's still too much flux upstream
10:29:21 <apevec> m3 is more reasonable, it's feature freeze
10:30:13 <kashyap> Sure, I'll note it in the suggestions here --  https://etherpad.openstack.org/p/rdo_test_day_feb_2014
10:36:42 <kashyap> #link Workarounds page http://openstack.redhat.com/Workarounds_2014_02
10:37:20 <kashyap> #addchair pixelb
10:37:22 <kashyap> #addchair apevec
10:42:42 <yfried> yrabl: working on rhel65?
10:47:04 <afazekas> http://repos.fedorapeople.org/repos/openstack/openstack-icehouse/epel-7/ ???
10:49:26 <kashyap> #info For all new workarounds, please add them here -- http://openstack.redhat.com/Workarounds_2014_02
10:52:44 <apevec> afazekas, not-there-yet
10:52:54 <apevec> please focus on el6
11:31:45 <ohochman> apevec: ping
11:38:15 <ohochman> apevec: It seems that the puppet used by the foreman-server (installed on RHEL6.5) attempts to connect to mysql on the f20 foreman-client machines.. (instead of connecting to mariadb?)
11:38:47 <ohochman> http://pastebin.com/ydPPezCg
11:39:26 <apevec> ohochman, hmm, we'll need puppet experts here, hopefully jayg&co will join soon
11:40:00 <ohochman> apevec: ok
11:40:08 <ohochman> jistr: maybe you'll know ?^
11:40:12 <kashyap> ohochman, You can note these issues in the etherpad, and when Puppet folks join, you can point them to it
11:40:16 <apevec> so it's mixing client/server in conditionals?
11:40:27 <kashyap> (Instead of explaining whole questions again. Just noting.)
11:40:51 <ohochman> kashyap: what's the etherpad link ?
11:41:03 <kashyap> ohochman, https://etherpad.openstack.org/p/rdo_test_day_feb_2014
11:41:08 <ohochman> kashyap: 10x
11:41:28 <ohochman> apevec: not sure what you mean,
11:41:59 <apevec> ohochman, just guessing: it selects mysql based on the fact that the server is rhel?
11:42:34 <apevec> iirc there's case statement in mysql puppet module
11:43:46 <ohochman> apevec: I'm not sure the puppet modules foreman is using even support f20?
11:47:06 <apevec> hmm, good question
11:47:33 <apevec> mmagr, ^^^ is mysql module forked or upstream in puppet modules RPM ?
11:48:51 <apevec> upstream knows about mariadb in fedora >= 19 https://github.com/puppetlabs/puppetlabs-mysql/blob/master/manifests/params.pp
11:50:04 <apevec> ohochman, question is where is that conditional evaluated
11:50:32 <ohochman> apevec: I was told that F19 should not be covered in this test-day (no packaging..)
11:51:02 <apevec> ohochman, I was just mentioning it in the context of above module
11:51:04 <apevec> no implication
11:51:12 <apevec> and yes, no f19 today
11:51:22 <ohochman> ok
11:52:27 <panda> oh, puppet is still using /sbin/service on f20 .. :/
11:59:00 <jistr> apevec, ohochman: the conditionals are evaluated on the machine which is being provisioned, so that should be ok
11:59:35 <ohochman> jistr: what does it mean being provisioned?
11:59:46 <apevec> jistr, that's what I'd expect, what is failing then for ohochman ?
11:59:54 <jistr> ohochman: sorry, i meant where puppet is being run
12:00:30 <ohochman> I have this puppet version  : puppet-3.4.2-1.fc20.noarch
12:00:30 <jistr> and packstack master links to this commit of the mysql module https://github.com/packstack/puppetlabs-mysql/blob/83abc4556bbf6745708c08375649c9d71b6f66db/manifests/params.pp
12:00:43 <jistr> so that looks ok too...
12:03:07 <jistr> ohochman: do you have a log what happened before the pastebin?
12:03:17 <jistr> the first warning there is "Warning: /Stage[main]/Mysql::Server::Account_security/Database_user[root@oh-havana-controller]: Skipping because of failed dependencies"
12:03:33 <ohochman> jistr: let me check
12:03:40 <jistr> and so there should be some "Error: " message earlier than that
12:04:38 <ohochman> jistr: I can try to look it up in /var/log/messages
12:05:26 <ohochman> jistr: this should be a bit longer : http://pastebin.com/t4jj9L9P
12:05:35 <blinky_ghost> Hi all, anybody can help me implementing l3-agent high availability on rdo havana?
12:05:40 <ohochman> jistr: but I'll try to check /var/log/messages
12:06:11 <jistr> Error: Could not prefetch database_grant provider 'mysql': Execution of '/usr/bin/mysql --defaults-file=/root/.my.cnf mysql -Be describe user' returned 1: Could not open required defaults file: /root/.my.cnf
12:06:11 <jistr> Fatal error in defaults handling. Program aborted
12:06:18 <jistr> this looks like the problem ^
12:06:30 <ohochman> jistr: yep
12:07:35 <ohochman> jistr: [root@oh-havana-controller ~]# find / -name my.cnf
12:07:36 <ohochman> /etc/my.cnf
12:07:41 <ohochman> not under /root
12:08:49 <ohochman> strange.. this file ^^ contains mysql details (on f20)
12:09:04 <ohochman> again not mariadb...
12:11:33 <mmagr> apevec, Hi Alan, currently we are using our fork
12:18:31 <afazekas> ohochman: /root/.my.cnf  # config file names in home directories usually start with a dot
12:19:01 <ohochman> afazekas: ok- so this file cannot be found.......
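For background: the puppet mysql provider reads the root credentials from /root/.my.cnf, which explains the error above. A minimal sketch of creating one by hand; the password value is a placeholder:

    # Write a minimal client-credentials file for the mysql provider to read
    cat > /root/.my.cnf <<'EOF'
    [client]
    user=root
    password=<mysql root password>
    EOF
    chmod 600 /root/.my.cnf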
12:26:30 <ohochman> beagles: can you check on https://etherpad.openstack.org/p/rdo_test_day_feb_2014  --> == Openstack-Foreman-Installer == --> issue #3 ?
12:28:42 <ukalifon> mmagr: Packstack on RHEL65 tries to update the kernel: err: /Stage[main]/Packstack::Netns/Exec[netns_dependecy_install]/returns: change from notrun to 0 failed: yum update -y kernel iputils iproute returned 1 instead of one of [0] at /var/tmp/packstack/b8056c3f40ee405f862a6dff12b6df64/modules/packstack/manifests/netns.pp:14
12:29:22 <mmagr> ukalifon, and what is "yum update -y kernel iputils iproute: saying?
12:30:01 <ukalifon> mmagr: I didn't try to run it manually. Why is it required to update the kernel?
12:30:53 <mmagr> ukalifon, for some reason netns test failed for you
12:31:24 <mmagr> ukalifon, so packstack is trying to get packages required for netns
12:34:18 <ukalifon> mmagr: running it manually gives me:
12:34:18 <ukalifon> 153 packages excluded due to repository priority protections
12:34:18 <ukalifon> Setting up Update Process
12:34:18 <ukalifon> No Packages marked for Update
12:36:16 <mmagr> hmm, but this should return 0
12:36:26 <mmagr> ukalifon, ^
12:36:54 <ukalifon> mmagr: it returns 0
12:37:52 <mmagr> ukalifon, ok so there had to be some problem during puppet run, so just rerun packstack with your answer file
12:38:12 <ukalifon> mmagr: thanks
12:38:17 <mmagr> ukalifon, np
12:38:19 <kashyap> #addchair mmagr
12:38:30 <mmagr> ?
12:38:47 <afazekas> psedlak: looks like on f20 the server tries to log to /var/log/mysqld.log; maybe the puppet module changed the log dir?
12:38:48 <kashyap> mmagr, Heya, just added, if you need to post info/url/links, you can use this -- http://fedoraproject.org/wiki/Zodbot
12:39:34 <kashyap> Zodbot is running, so, at the end of test day, URLs/Info/Ideas (if they're used reasonably consistently :-) ) will be collected nicely in one place.
12:39:55 * kashyap off to go make dinner
12:39:55 <mmagr> afazekas, well yep, only service name is currently patched to mariadb
12:40:04 <beagles> hi ohochman: I've run into something like that a few times. running "yum clean all" before running the foreman scripts worked each time
12:40:22 <nmagnezi> have any of you tried 'packstack --allinone'? I get an error that packstack failed to install mariadb (which I did not ask for): http://fpaste.org/74251/13915175/
12:41:10 <afazekas> mmagr: the problem is the log file does not exist, and mysql/mariadb does not have permission to create it in /var/log
12:42:36 <mmagr> afazekas, ah crap ... I did not realize that it might not exist :) ... well then yeah, will fix that
12:44:23 <ohochman> nmagnezi: see ukalifon's mail about it..
12:44:34 <afazekas> mmagr: I wonder why it first starts the server, then restarts it
12:45:19 <mmagr> afazekas, mysql/mariadb server?
12:45:31 * afazekas on starting we had this issue:  https://bugzilla.redhat.com/show_bug.cgi?id=1061045
12:46:36 <afazekas> mmagr: looks like during installation the mysql server first gets started, then the configuration changes, then the service is restarted
12:46:58 <psedlak> afazekas: do we need workaround also for the '/var/log/mysqld.log'?
12:47:19 <afazekas> psedlak: yes
12:47:21 <psedlak> afazekas: and btw you can add it to the testday page/etherpad too i guess
12:47:38 <afazekas> Probably you just need to create it mysql:mysql
12:48:55 <psedlak> afazekas: are you sure about the path to mysqld.log? no subdir etc?
12:49:22 <afazekas> from the my.cnf:  log_error          = /var/log/mysqld.log
12:49:59 <afazekas> WTF
12:50:38 <afazekas> psedlak: http://www.fpaste.org/74253/18227139/
12:52:21 <afazekas> Probably the service runs under mysql user that time
12:58:16 <afazekas> Why does the service script attempt to create the log file with the correct permissions if it doesn't have enough permission to do that?
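A sketch of the workaround discussed above: create the log file by hand with the owner the daemon expects (the mode and service name are assumptions):

    # Pre-create the log file mysqld/mariadb cannot create itself in /var/log
    touch /var/log/mysqld.log
    chown mysql:mysql /var/log/mysqld.log
    chmod 640 /var/log/mysqld.log
    systemctl restart mariadb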
13:45:38 <weshay> so AIO icehouse installs have been removed from the tested-setups page.  Just double checking that is what *we* want
14:00:38 <defishguy> I have a "best practice" question.  I'm running havana on CentOS 6.5 with 3 nodes (2x compute). Originally the instances were stored on the local file system, but now I have a requirement to move them to an nfs mount so that we can migrate etc.  I'm new to this.... What's the best way to go about this?
14:01:12 <larsks> Good morning, RDO test day people...
14:04:42 <ajeain> larsks: good morning to you :)
14:07:12 <defishguy> Is it really as simple as changing the backend from lvm to nfs and mounting?
14:09:00 <larsks> defishguy: If everything is available at the same paths, that should Just Work, although you will need to shut down and restart any instances.
14:11:30 <defishguy> larsks:  Thank you.
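A hedged sketch of what larsks describes, assuming nova's default instances_path and placeholder server/export names; this would be done on every compute node:

    # Mount the shared export at nova's default instances path
    mount -t nfs nfs-server:/export/nova_instances /var/lib/nova/instances
    # Persist it in /etc/fstab:
    #   nfs-server:/export/nova_instances  /var/lib/nova/instances  nfs  defaults  0 0
    # Then shut down and restart instances so they run from the shared path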
14:20:52 <ohochman> jayg|g0n`: ping
14:23:22 <jayg> ohochman: pong
14:24:45 <ohochman> jayg:  hi , you had a chance to look at https://bugzilla.redhat.com/show_bug.cgi?id=1061152
14:25:18 <jayg> ohochman: yeah, I just read it in my email, fedora 20 is not currently supported for foreman installations
14:25:49 <jayg> afaik foreman itself only runs on f19, and for astapor, we have not yet conditionalized the rhel scl stuff to make the installer work on fedora period
14:25:53 <ohochman> jayg: as foreman_client  ?
14:26:11 <jayg> there is no mariadb support, that is a rhos 5 rfe
14:27:04 <jayg> the whole setup is pretty much targetted to rhel/centos atm, once we have time we'll add fedora support, but there are only 2 of us working on it
14:27:19 <ohochman> jayg: so,  only RHEL6.5 for using foreman scenario - for both foreman-server and foreman-clients. ?
14:27:42 <jayg> ohochman: yes
14:27:56 <jayg> if someone added fedora to the test page, that was an error; I did not
14:28:30 <ohochman> jayg: I have no problem with that.. just that this information wasn't published and made clear to us.
14:29:03 <ohochman> pmyers: ^
14:30:25 * jayg checks test page - did you see openstack-foreman-installer rpms in fedora repos?  they have never been there before
14:31:27 <jayg> ohochman: this is the only page I have ever updated that I recall for test days, fwiw: http://openstack.redhat.com/TestedSetups#Advanced_Installs_.28Foreman_Based.29_--_Work_in_Progress
14:32:28 <ohochman> jayg: well I guess rhel6.4 should be removed from there as well. ^
14:33:27 <ohochman> jayg: this is an old RDO testing table.
14:33:41 <jayg> sure, it is a wiki, feel free to update; if not, I'll try to look later
14:34:34 <ohochman> jayg: anyways I was here yesterday asking  about what should be covered in this test-day (the foreman side)
14:34:51 <jayg> did you ping me?  I must have missed that
14:35:35 <ohochman> jayg: (no, I pinged pmyers) I was told that only the foreman-server should be installed on rhel6.5 and the clients can be tested on  F19 , F20
14:36:22 <ohochman> jayg: just saying we should have built a testing table for what's to be covered in the test day.
14:37:16 <jayg> ohochman: sure, that is erroneous information, and to my knowledge nobody has tried foreman client on fedora of any version, though I suspect not much would be needed to make it work
14:37:22 <ohochman> jayg:  what about rhel7 ?
14:38:05 <jayg> the main thing when applying a configuration like you did, would be whatever tweaks are needed in the upstream puppet code (openstack) to support mariadb, and frankly I haven't looked at that in some time, due to more pressing items
14:38:20 <ohochman> jayg: Ok , so.  I've tried that (foreman client ) and it doesn't work on fedora.
14:39:09 <jayg> yeah, let me find the BZ requesting mariadb support - once that is there, I think fedora on a client would work fine
14:39:30 <ohochman> jayg: Ok so we have this one as a reminder for ice-house : https://bugzilla.redhat.com/show_bug.cgi?id=1061152
14:39:59 <jayg> http://openstack.redhat.com/TestedSetups#Advanced_Installs_.28Foreman_Based.29_--_Work_in_Progress
14:40:02 <jayg> rr, sry
14:40:06 <jayg> https://bugzilla.redhat.com/show_bug.cgi?id=1017210
14:40:48 <ohochman> jayg: in rhos5.0 we have it as RFE on foreman.
14:41:02 <ohochman> jayg: yes this one ^^ https://bugzilla.redhat.com/show_bug.cgi?id=1017210
14:43:27 <jayg> ohochman: to be clear, foreman itself is unlikely to run on fedora 20 anytime soon, as I understand it, due to rails versioning issues
14:43:59 <ohochman> jayg: what about the client side ?
14:44:02 <jayg> but once the mariadb rfe is done, it will make client version a bit more lenient
14:44:13 <ohochman> jayg: it should work for RHEL7 anyways..
14:44:22 <jayg> as far as rhel 7, I don't see why it wouldn't work, but I have never tried it
14:44:54 <jayg> our target to date has been firmly rhel 6.latest
14:45:03 <ohochman> but there's not much of a difference: if it works on rhel7 it can work on fedora, no?
14:45:38 <jayg> if rhel 7 has only mariadb, then I would expect that to fail as well
14:45:41 <ohochman> you're talking about the foreman-server ..  only rhel6-latest ..
14:46:11 <jayg> yeah, but for rhel we have scls, so foreman team can control their rails version
14:46:12 <ohochman> but the foreman-client should work on rhel7/f19/f20
14:46:25 <jayg> modulo mariadb, yes
14:47:09 <ohochman> jayg: ok .
14:48:07 <jayg> ohochman: sorry for the confusion, please ping me directly with questions in irc or email, otherwise I am unlikely to see, I am in too many channels, and too heads down on actual coding tasks
14:55:54 <lon> apevec: I collected some dependency notes on installing openstack-nova on el7
14:56:07 <apevec> where?
14:56:11 <lon> apevec: one sec, I'll paste them
14:56:50 <apevec> I'm adding some missing deps for *client, just got few requested epel7 branches created
14:56:58 <lon> http://titanpad.com/k3fUnioHQN
14:57:50 <lon> I'll keep adding stuff and updating my dependencies-needed list
14:58:02 <lon> I'm sure most are known
14:58:14 <lon> but it's helpful to go through stuff, I think
14:59:42 <lon> apevec: One specfile needed tweaking
15:00:07 <lon> it has a %define with_python3 1 -> 0, but other than that it was straight grab-from-koji-and-rebuild
15:00:13 <lon> so far
15:00:17 <weshay> ohochman, jayg it looks like the fedora20 mariadb issue is caused by permissions on the /var/log/mariadb/mariadb.log file
15:00:22 <lon> I'll install the nova sub-rpms now and see what happens
15:00:28 <apevec> lon, yeah, that's what I'm fixing now in epel7 branches
15:00:31 <weshay> if you open up the permissions on it.. you can restart the service
15:00:37 <lon> cool
15:01:04 <lon> apevec: ok, if I find more, I'll keep updating that pad
15:01:09 <apevec> lon, where did you find that titanpad ?
15:01:21 <lon> google
15:01:22 <apevec> btw, there's etherpad.openstack
15:01:29 <lon> google 'free etherpad'
15:01:29 <apevec> don't trust google
15:01:43 <apevec> esp not results with free keyword :)
15:01:44 <lon> I figured other RDO users might be trying RHEL7 beta
15:01:49 <lon> :]
15:02:34 <apevec> lon, kashyap created earlier https://etherpad.openstack.org/p/rdo_test_day_feb_2014
15:02:46 <lon> oh, I see
15:02:50 <lon> I'll move things around
15:03:10 <apevec> better for branding to use foundations resources :)
15:03:39 <apevec> lon, re. pyparsing - that's el7 base - is there rhbz for missing dep?
15:03:51 <lon> it's in el7 base
15:04:08 <lon> see notes - we need 2.0.1 to match dependencies; 1.5.6 is in el7 base
15:04:10 <apevec> oh wait, you're rebuilding it
15:04:44 <apevec> lon, yeah, I'm building older cmd2/cliff to avoid dep on newer pyparsing
15:04:52 <apevec> it's enough for openstack requirements
15:05:05 <jayg> weshay: so that is just an issue with the mariadb rpm on fedora 20 itself?
15:05:09 <lon> ok
15:05:22 <weshay> ya
15:05:27 <lon> apevec: thanks, I'll update my table of dependencies
15:05:31 <jayg> lovely
15:05:58 <weshay> https://bugzilla.redhat.com/show_bug.cgi?id=1061045
15:06:00 * jayg wonders if upstream openstack puppet supports mariadb yet, haven't looked in a while - is that what people are testing with packstack?
15:06:21 <apevec> lon, re. jsonschema - mrunge fixed that earlier
15:06:51 <yrabl> kashyap, ping
15:08:40 <lon> apevec: moved: https://etherpad.openstack.org/p/nova-deps-epel7-rdo-i-td2
15:08:58 <apevec> lon, good boy!
15:09:19 <apevec> I'll update when I get proper epel7 builds done
15:11:40 <lon> jsonpointer samkottler built
15:11:41 <weshay> is the incorrect version of python-backports getting installed for anyone other than myself :) ?
15:14:20 <apevec> samkottler you're too fast
15:15:00 <apevec> now we need wait-repo
15:16:06 <lon> cool
15:16:15 <lon> apevec: no other deps after installing all nova sub-rpms
15:16:23 <apevec> koji wait-repo epel7-build --build python-jsonpointer-1.0-2.el7
15:17:45 <lon> keystone's good
15:18:09 <apevec> samkottler, I'll resubmit 6489486
15:19:14 <lon> o.O
15:19:19 <lon> No package scsi-target-utils available.
15:22:10 <apevec> lon, where did that go?
15:22:24 <apevec> included in systemd? :P
15:23:03 <lon> openstack-swift
15:23:07 <lon> er
15:23:08 <lon> I think
15:23:12 * lon looks at history
15:23:28 <lon> cinder
15:23:35 <lon> requires python-swiftclient and scsi-target-utils
15:23:46 <lon> you know you're in dependency heck when you get this far
15:23:49 <apevec> samkottler, ok, I cannot resubmit, so I'll just rebuild unless you're running it now?
15:24:33 <lon> then you find that scsi-target-utils needs ceph-devel, which needs ceph, which needs libtcmalloc, which is provided by gperftools, which ... needs libXaw3D?!!!
15:24:48 <apevec> wow
15:25:01 <lon> also ghostview
15:25:03 <lon> ...
15:25:09 <lon> magical!
15:47:43 <ohochman> morazi: when running foreman_client.sh I'm getting the following : Error: /File[/var/lib/puppet/lib/puppet/type/neutron_network.rb]/ensure: change from absent to file failed: execution expired
15:47:44 <ohochman> Error: Could not retrieve plugin: execution expired
15:48:09 <ohochman> morazi: seems harmless but still.. error.
15:50:19 <ohochman> jayg: ^
15:51:02 <jayg> execution expired, hmm, never seen that error before
15:51:25 <jayg> ohochman: what hostgroup are you trying to apply?
15:52:19 <ohochman> jayg: Got another one just at the end -->  Error: /File[/var/lib/puppet/lib/puppet/provider/vcsrepo/cvs.rb]/ensure: change from absent to file failed: execution expired
15:52:20 <ohochman> Error: Could not retrieve plugin: execution expired
15:53:22 <jayg> ohochman: hmm, I'll google what that error means, but neither of those are directly quickstack code
15:53:59 <ohochman> jayg: ok,  let me know if we need to open RDO bug for it.
15:54:11 <jayg> ohochman: can you try a second run and see if the error repeats?
15:54:24 <jayg> quick google makes it look like a timeout of some kind
15:54:38 <ohochman> jayg: the client seems to be registered to the foreman-server with no problem.
15:55:23 <ohochman> jayg: Ok  I'm trying to run again the foreman_client.sh ..
15:56:24 <ohochman> jayg: no error the second run.
15:56:46 <jayg> oh, you got it on initial registration?  I guess if you were to get it, that would make sense, since it pulls down the whole catalog
15:59:03 <jayg> ohochman: any chance you have multiple clients trying to register at the same time?  looks like that can contribute to this
15:59:22 <morazi> ohochman, k, thanks for the heads up.  I have never seen that either.  but it sound like an agent timing out while trying to sync up with the puppet master
15:59:28 <jayg> especially as I believe the foreman proxy is still on WEBrick, which is not exactly performant
15:59:32 <jayg> http://grokbase.com/p/gg/puppet-users/1387v5yek8/puppet-first-run-timing-out
15:59:51 <jayg> if anything, this may be a bug against foreman itself
16:00:20 <jayg> though the case there is 'move to a modern web server', imo...
16:01:04 <ohochman> jayg: Yes. it's  2 client at the same time ...
16:01:34 <jayg> ok, so WEBrick can only serve one request at a time, that is probably it
16:01:50 <ohochman> jayg: we're supposed to be able to scale up client registration, right?
16:02:05 <ohochman> (not 1 client at a time..)
16:02:29 <ohochman> bug on foreman?
16:03:08 <jayg> perhaps; let me make sure I am right about it being WEBrick - from that thread:
16:03:09 <jayg> 'The master's built-in "webrick" server support
16:03:12 <jayg> serving only one client at a time, even if the resources available on the
16:03:14 <jayg> master's host would otherwise be sufficient to handle more.'
16:03:45 <ohochman> jayg: I guess you nailed it.
16:04:16 <ohochman> jayg: on that thread -  I had another error :
16:04:27 <ohochman> Error: Could not retrieve plugin: execution expired
16:04:39 <ohochman> not saying much I guess.
16:06:01 <jayg> ohochman: what do you mean by 'on that thread'?  the link I sent, or output from your agent?
16:06:47 <ohochman> jayg: on my agent output .. -  It was two clients running foreman_clients.sh at the same time.
16:07:07 <jayg> ah, yeah, so consistent at least
16:07:23 <ohochman> different errors on the screen .. but they both manage to register well against the server.
16:07:52 <ohochman> so maybe those errors are harmless..
16:08:20 <jayg> yes, I suspect they would self-correct when you applied a hostgroup anyway
16:08:29 <ohochman> I'm testing now the neutron hostgroups - we'll see if that works.
16:21:50 <giulivo> for people running on f20 with selinux enabled, I think we also need "chcon -u system_u -r object_r -t mysqld_log_t /var/log/mysqld.log" ; I added it in the etherpad
16:29:00 <jayg> ohochman: I need to run an errand- in case you ping me and I dont respond, that is why - cwolfe should be on very shortly as well, and he can answer technical issues if I am not here
16:29:19 <ohochman> jayg: Ok,
16:29:41 <ohochman> jayg: so far I managed to deploy neutron controller
16:29:59 <ohochman> now deploying the other networker and compute.
16:30:52 <jayg> ohochman: excellent!  this is against icehouse?
16:31:18 <ohochman> jayg: yes. of course.
16:31:28 <jayg> great
16:35:54 <nmagnezi> ohochman, kashyap where do we keep track of bugs?
16:36:17 <ohochman> nmagnezi: https://etherpad.openstack.org/p/rdo_test_day_feb_2014
16:36:28 <ohochman> under ==bugs==
16:37:09 <nmagnezi> ohochman, thanks
16:38:08 <afazekas> http://repos.fedorapeople.org/repos/openstack/openstack-icehouse/epel-6/ I do not see new swift package for el6
16:39:20 <ohochman> jayg: strange.. now I'm installing only one foreman-client (not 2) and getting this : Error: /File[/var/lib/puppet/lib/puppet/type/cinder_api_paste_ini.rb]/ensure: change from absent to file failed: execution expired
16:39:20 <ohochman> Error: Could not retrieve plugin: execution expired
16:55:49 <ohochman> jayg: morazi: it worked - so we have the neutron scenario covered in icehouse (rdo).. but only against rhel6.5; any other OS fails.
17:00:32 <morazi> ohochman, nice on rhel6.5.
17:01:21 <morazi> ohochman, I'm curious about the failures on the other platforms (though we know there's no foreman on f20 and I don't think we provided builds for f19 at this point) -- what are the symptoms on the other OSes?
17:01:24 <ohochman> morazi: yep, for some reason I thought we were ready to test rhel7 and f20
17:02:24 <ohochman> morazi: you mean f20? we had a problem connecting to mariadb: https://bugzilla.redhat.com/show_bug.cgi?id=1061152
17:10:34 <morazi> jayg, ^^ I think that one is going to be an interesting quandary.  iirc the puppet modules in packstack have a switch in there to toggle mariadb on fedora installs.  Granted, I'm not sure what we actually want to do w/r/t mixed environments like that...
17:10:54 <morazi> jayg, we may not need to solve it right now, but probably something to think about.
17:34:05 <kashyap> nmagnezi, Hi, in the etherpad --
17:34:18 <kashyap> nmagnezi, https://etherpad.openstack.org/p/rdo_test_day_feb_2014
17:34:41 <nmagnezi> kashyap, yup :) Omri already pointed it out, going to add another bug to the list soon
17:35:00 <kashyap> nmagnezi, Please don't forget to click the star mark to save a revision, in case you made a lot of edits
17:35:13 <kashyap> Cool (or not :-)).
17:35:28 <nmagnezi> kashyap, just adding bugs to the list, nothing more than that :)
17:39:06 <larsks> kashyap: We have a list of bugs in the etherpad...and in the wiki?  Are they different?  Can we consolidate on one location?
17:39:37 <kashyap> larsks, The wiki is for "workarounds".  Etherpad is light-weight. Sure, we can add workarounds
17:39:47 <kashyap> Ugh, thinko:
17:39:53 <larsks> I figure it would just be nice to have a single list of bugs somewhere.
17:40:02 <kashyap> (I meant - sure, we can consolidate)
17:40:22 <larsks> Or maybe some sort of keyword in bugzilla (because then we can *generate* the list, which seems nice).
17:40:32 <kashyap> Yeah, etherpad because, as they say, everyone is in a hurry on the internet and folks don't bother. If it's etherpad, at least they can just dump and save a revision
17:40:53 <kashyap> True, sir.
17:41:04 <kashyap> #addchair larsks
18:09:02 <verdurin> Installation proceeded without errors on CentOS 6.5
18:09:19 <verdurin> However, I'm testing with Neutron, which I haven't used before (allinone)
18:09:55 <verdurin> Is it expected that with the admin project the instance won't pick up an IP?
18:11:08 <verdurin> I.e. with Neutron, do you have to do extra configuration before it works at all, using Packstack?
18:16:28 <larsks> verdurin: With a --allinone install I think it should mostly just work.  If you run "neutron net-list", do you have networks defined?
18:17:05 <larsks> ...actually, I think that for --allinone, you may need to boot using the "demo" tenant to have a private network available.
18:18:53 <verdurin> larsks: yes, I've just tried with the 'demo' tenant now, and that has both public and private networks
18:22:01 <verdurin> larsks: I think the routing's wrong because pings to that host receive a reply from a completely different machine. Hmm.
18:22:19 <verdurin> I don't think that's an RDO/Packstack problem per se.
18:32:44 <blinky_ghost> Hi all, anybody can help me implementing l3-agent high availability on rdo havana?
18:34:11 <verdurin> larsks: thanks anyway, I'll take another look tomorrow
18:35:15 <larsks> verdurin: No worries.  If you want to look at it in more detail, let me know.
18:35:32 <larsks> blinky_ghost: Have you seen the HA guide on openstack.org?
18:36:00 <blinky_ghost> larsks: yes, but it doesn't work
18:36:23 <larsks> Bummer.  I haven't set up HA myself, so I probably won't be much help.
18:37:31 <blinky_ghost> larsks: I've downloaded the pacemaker plugin from https://github.com/madkiss/openstack-resource-agents/blob/master/ocf/neutron-agent-l3 but it doesn't move the router to the new host
18:38:37 <Pursuit[LT]> Hey all, I'm having some difficulty with Neutron+OVS+VLANS+Provider networks. I followed the guide on the RDO site, but packstack seems to get hung on testing if neutron.pp is finished. Seems to be an scp that just sits there forever (24+ hours).
18:38:41 <Pursuit[LT]> Any pointers?
18:47:47 <larsks> blinky_ghost: Looking at HA is on my short list for next week...if I get anywhere I'll check in with you and see if you still need help.
18:48:18 <blinky_ghost> larsks: ok, thanks.
19:20:54 <jayg> Pursuit[LT]: I could be remembering wrong, but I thought I saw at one point to just rerun with your answer file if you get that kind of hang
19:38:04 <Pursuit[LT]> jayg: I think I found the problem. scp seemed to be stuck at trying a reverse name lookup. Setting a name in the hosts file fixed it
19:38:55 <jayg> ah, cool
19:39:08 <Pursuit[LT]> Now my issue is that the interface that I'm using for the tenant traffic is also the one with the ip for the host, and it gets reconfigured in such a way that it no longer provides network access during the OVS setup
19:39:17 <Pursuit[LT]> causing all other modules to fail
19:41:08 <jayg> is the ip being moved to br-ex or similar?
19:44:25 <Pursuit[LT]> No, it stays on the original interface
19:44:38 <Pursuit[LT]> I'm using provider networks FYI
19:45:39 <jayg> this is likely a question for beagles or someone else who is very familiar with neutron config + packstack
20:07:41 <beagles> Pursuit[LT], hi.. basically whatever physical interface you are using for the tenant traffic is going to be bridged and, consequently, lose the configured IP
20:08:26 <beagles> Pursuit[LT], one of the approaches being used to date is to create a bridge as part of the system configuration and have the bridge itself get the IP address
20:11:36 <beagles> Pursuit[LT], you can then have neutron use that bridge for outgoing connections. This will leave the IP address accessible to the processes running on the node while allowing neutron to create the necessary bridge connections
20:12:27 <beagles> there is some talk about the general technique here: http://openstack.redhat.com/forum/discussion/577/howto-packstack-allinone-install-with-neutron-and-external-connectivity/p1
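The technique beagles describes is typically done with initscripts ifcfg files; a sketch assuming an OVS bridge named br-ex, a physical NIC eth0, and placeholder addresses:

    # /etc/sysconfig/network-scripts/ifcfg-br-ex -- the bridge carries the host IP
    DEVICE=br-ex
    DEVICETYPE=ovs
    TYPE=OVSBridge
    BOOTPROTO=static
    IPADDR=192.0.2.10
    NETMASK=255.255.255.0
    ONBOOT=yes

    # /etc/sysconfig/network-scripts/ifcfg-eth0 -- the NIC becomes a port on the bridge
    DEVICE=eth0
    DEVICETYPE=ovs
    TYPE=OVSPort
    OVS_BRIDGE=br-ex
    ONBOOT=yes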
20:16:32 <Pursuit[LT]> beagles: ah, ok, I may give that a shot. Currently I'm trying just moving everything to the other interface aside from the tenant traffic. will report back with success/failure
20:17:05 <beagles> Pursuit[LT], k cool
20:19:52 <jayg> beagles: thanks for that link - semi-related, do you happen to know if the puppet docs in the various modules (like neutron) are published anywhere?  Some of them have pretty decent descriptions, and seems like that would be a handy reference
20:23:34 <beagles> jayg: mmm, you mean pulled together like a kind of compendium?
20:24:53 <jayg> beagles: yeah, sort of - thinking like ruby has rubydoc for each gem - it is just the published documentation from libraries
20:25:28 <jayg> I did a bit of googling and found nothing, so I think it doesn't exist, at least publicly
20:25:46 <larsks> jayg: There is a "puppet doc" command that will generate documentation from your manifests.
20:25:56 <larsks> ...so you could build it and host it somewhere :)
20:26:10 <jayg> larsks: exactly, and that should be published somewhere public for reference
20:26:18 <jayg> well, github comes to mind
20:26:20 <larsks> jayg: Yes, seems like a good idea.
20:26:27 <jayg> the projects are already there, and they host static content for free
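A sketch of the 'puppet doc' invocation being discussed, for puppet 3.x; the output directory and module path are assumptions:

    # Generate rdoc-style HTML documentation from installed modules
    puppet doc --mode rdoc --outputdir /tmp/puppet-docs \
        --modulepath /usr/share/openstack-puppet/modules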
12:11:10 <anand> Hi guys. my setup is node1 (controller+compute), node2 (compute). if I launch any instance it lands on node1; how can I launch an instance whose hypervisor host is node2 (compute)?
12:14:39 <anand> http://docs.openstack.org/user-guide-admin/content/specify-host-to-boot-instances-on.html
12:20:04 <anand> I followed the above doc but I can't get my instance launched; it ends with an error. how do I select the host where instances are launched?
12:24:38 <anand> error : u'message': u'No valid host was found. ', u'code': 500, u'created': u'2014-02-05T12:16:35Z'} | | OS-EXT-STS:power_state
12:44:04 <oblaut> ohochman: ping
12:44:30 <ohochman> oblaut: Yes, the problem with the foreman-proxy
12:44:52 <ohochman> I saw it before, but it hasn't happened to me for a while.
12:45:49 <oblaut> ohochman: how do i fix it ?
12:46:29 <ohochman> oblaut: you can use my foreman-server (it's clean. ) or talk to dcleal about it in #rhos-dev
12:48:28 <ohochman> the proxy error (during foreman_server.sh) : Error: /Stage[main]/Foreman_proxy::Register/Foreman_smartproxy[puma04.scl.lab.tlv.redhat.com]: Could not evaluate: Connection refused - connect(2) Notice: Finished catalog run in 300.85 seconds
13:23:54 <ukalifon> ayoung: ping
13:29:48 <panda> I'm trying to test a 2-node installation on rhel6.5 using vxlan but I'm having some problems
13:30:54 <panda> the cirros instance is not receiving any dhcp response . On the compute node a tcpdump on tap interface and even on br-int show a correct discover packet
13:31:35 <panda> but on br-tun already I don't see any activity at all, and that should be a step before the use of vxlan
13:32:25 <panda> ovs-vsctl show: patch-int and patch-tun are peered correctly
13:42:11 <yrabl> xqueralt, ping
13:42:27 <xqueralt> hi yrabl, what's up?
13:42:58 <yrabl> xqueralt, well :) how are you?
13:46:28 <xqueralt> yrabl, good, anything I can help you with?
13:47:47 <yrabl> xqueralt, sure: I have an instance in error status and I can't delete it, detach the volumes from it, create a snapshot of the volumes, or copy their content to a new volume...
13:48:08 <yrabl> xqueralt, the nova delete or force-delete don't work
13:48:37 <mpavlase> Hi, has anyone tried during the RDO test day to successfully install packstack on RHEL 6.5?
13:49:58 <panda> mpavlase: I tried, successfully
13:51:49 <xqueralt> yrabl, can you paste the compute and api logs somewhere?
13:53:02 <mpavlase> panda: My installation failed because of an unsatisfied dependency, the rubygems package.. can you please run $ rpm -q rubygems and send me your output?
13:54:33 <panda> mpavlase: rubygems-1.3.7-5.el6.noarch
13:54:50 <mflobo> Anyone have any idea of this problem? https://ask.openstack.org/en/question/11458/keystone-all-importerror-cannot-import-name-deploy/
13:57:46 <kashyap> mflobo, You might want to post relevant log snippets in the question there, as pastebins expire
13:58:23 <panda> mpavlase: installed from "optional" repo
13:59:24 <mpavlase> panda: aaha, thanks, probably that would be the missing thing
13:59:36 <kashyap> mflobo, Just guessing - just ensure if you have the right versions of -- python-pastedeploy and python-paste
13:59:44 <giulivo> kashyap, did you install some m2 using foreman?
14:00:01 <kashyap> giulivo, Not Foreman.
14:03:58 <kashyap> #info [Heads-up] There's a regression in libvirt F20 (_if_ you're using polkit ACLs. Shouldn't affect the test day) -- https://bugzilla.redhat.com/show_bug.cgi?id=1058839
14:04:11 <ayoung> ukalifon, I'm here
14:04:12 <kashyap> (There's a patch proposed for it, upstream)
14:06:12 <ukalifon> ayoung: Hi Adam. I am just starting with Icehouse and noticed that a lot has changed in the API. I am not able to list anything (projects, users, domains..)
14:06:45 <ukalifon> ayoung: for example, the following call returns "unauthorized": curl -H "X-Auth-Token:$TEST_TOKEN" http://localhost:35357/v3/projects
14:07:10 <ayoung> ukalifon, which policy file is  being used?
14:07:14 <giulivo> psedlak, the config is fine
14:07:36 <psedlak> giulivo: thx, not sure if it's related to #rdo though ;)
14:07:53 <ukalifon> ayoung: however, with the same token I have no problems creating new projects and users. It's just the listing that is a problem. There is a new policy.json file in this release
14:07:57 <ayoung> ukalifon, I'm still getting set up here, but my guess is that the install has defaulted to using the "cloud example" version of the policy file,
14:07:59 <giulivo> the storageapi config I mean
14:08:20 <giulivo> no the RDO thing is to see if foreman manages to enable some service without digging much into the puppet recipes :)
14:09:33 <ayoung> ukalifon, take a look at the policy file, and see if there is a different role you need...I haven't looked at it in a while.
14:09:58 <ayoung> "identity:list_users": "rule:cloud_admin or rule:domain_admin",
14:10:11 <ayoung> same as    "identity:create_user": "rule:cloud_admin or rule:domain_admin",
14:10:25 <ayoung> is that what you have?
14:11:15 <ukalifon> ayoung: I looked over the file and noticed that it is vastly different from what I know, however it still looks like I should have the needed credentials. I have:
14:11:15 <ukalifon> "identity:list_users": "rule:admin_required",
14:26:50 <mflobo> kashyap, after 2 days I just solved my issue. The problem was that I didn't have the python-routes and python-paste-deploy (not python-pastedeploy) libraries installed
14:27:44 <ayoung> ukalifon, what is the rule for creating users?
14:28:29 <ukalifon> ayoung:     "identity:create_user": "rule:admin_required"
14:28:57 <kashyap> mflobo, Nice.  It's always handy to do yum searches for package deps, etc .
14:29:12 <ayoung> and doing both operations with the same token gets you a 403?
14:29:46 * kashyap has to step out, see ya.
14:29:47 <ayoung> or a 401?
14:30:01 <ukalifon> creating a user succeeds
14:30:12 <ayoung> 401 means you need to authenticate,  403 means your token doesn't have the permissions
14:30:34 <ukalifon> ayoung: creating succeeds, only listing fails
14:30:46 <ukalifon> ayoung: with the same token
14:30:47 <mflobo> kashyap, yes, the trick was the difference between "python-paste-deploy" and "python-pastedeploy" (without dash)
14:30:51 <ayoung> what code does it get?
14:31:10 <mflobo> kashyap, thank you anyway ;)
14:31:27 <ukalifon> ayoung: let me check
14:32:23 <ayoung> ukalifon, I'd like to know if other people are seeing the same thing
14:33:03 <ukalifon> ayoung: using keystone client works, so I'm not sure if anyone else is trying to use the API like I am
14:33:29 <ayoung> ukalifon, wait, you can list users using the client, just not the API?
14:33:35 <ayoung> ah...but client is doing v2?
14:35:18 <ukalifon> ayoung: correct
14:35:32 <ayoung> ukalifon, ok, so the policy rule might be different
14:35:35 <kashyap> mflobo, A handy command for future reference:
14:35:38 <kashyap> $ repoquery --requires --recursive --resolve openstack-keystone
14:36:10 <ayoung> hmm.  Let me test on my system.  Its from git, but maybe...
14:36:17 <kashyap> mflobo, That'll give you all the dependencies needed for a specific package
14:36:46 <mflobo> kashyap, thanks a lot!
14:37:49 <mpavlase> panda: can you please send me a URL of that optional repo?
14:39:16 <ayoung> ukalifon, out of curiosity  v2 or v3 token?
14:39:40 <ukalifon> ayoung: the token is requested via V2.0 API
14:39:44 <ayoung> OK
14:39:49 <ayoung> just trying to reproduce
14:40:25 <ukalifon> ayoung: I just succeeded listing users, don't know what changed... trying also to list projects...
14:40:43 <ayoung> ukalifon, OK, I get success.  But that means nothing....
14:41:00 <ukalifon> ayoung: maybe Jeremy did something
14:41:08 <ayoung> jagee?
14:41:12 <ukalifon> yes
14:41:19 <ukalifon> he's checking on my system now
14:41:29 <ayoung> ukalifon, I tried listing projects.  Let me try users
14:41:42 <ayoung> Heh...yes
14:41:53 <ayoung> I had loaded up a slew of sample data...I should send to you
14:43:56 <ayoung> ukalifon, OK,  I'm going to get some breakfast etc.  Let me know what you find
14:44:06 <ukalifon> ayoung: thanks
15:05:13 <ohochman> jayg: any insights ?
15:06:01 <jayg> ohochman: still waiting for the last run from the service to stop - mind if I kill it?
15:06:16 <ohochman> jayg: go for it
15:06:36 <jayg> note I am on a call now (for about half hour) so I may be a little delayed in debugging/responding
15:07:58 <ohochman> jayg: sure, I'll have to go soon, so I'll write it down here. I had another issue that I wanted to ask you about... I got this strange error during a run of foreman_client.sh -->
15:08:00 <ohochman> Error: Could not autoload puppet/type/firewall: no such file to load -- puppet/util/firewall
15:08:01 <ohochman> Error: Could not retrieve catalog from remote server: Could not intern from text/pson: Could not autoload puppet/type/firewall: no such file to load -- puppet/util/firewall
15:08:17 <ohochman> http://pastebin.com/xUZJJG5a
15:10:00 <jayg> ohochman: if you want to pm me the foreman server info, I can take a look after as well - that looks like the underlying firewall module is missing something though
15:10:31 <ohochman> jayg: OK, and last thing.. I opened RDO Bz#1061613 for all the strange errors I encountered during yesterday's run of foreman_client.sh.
15:10:49 <jayg> ok, cool, good to track those
15:11:23 <ohochman> jayg: ok, thanks.
15:12:31 <giulivo> has anyone seen this: "libvirtError: internal error: CPU feature `svm' specified more than once"?
15:18:30 <ayoung> ukalifon http://admiyo.fedorapeople.org/openstack/keystone/sampledata/
15:18:39 <ayoung> should be something there you can use
15:19:15 <ayoung> kashyap, ^^ you too
15:31:17 <panda> anyone that can give me a hand with openflow ?
15:48:07 <ranjan> hi all, I have a question about RDO installation with packstack
15:48:08 <ranjan> anybody here :)
15:53:06 <morazi> panda, what are you trying to do?
15:53:51 <panda> morazi: multinode w/ vxlan; the instance starts but doesn't obtain an address via dhcp
15:54:17 <panda> morazi: looking at the flows on the br-tun bridge, I can see that all broadcast packets are dropped
15:55:26 <panda> so dhcp discover is dropped
15:56:06 <panda> never reaches the controller, but it is not vxlan related I think, it is at least one step before
15:57:00 <morazi> panda, hrm.  rkukura otherwiseguy beagles are usually my go-to folks for all things neutron.
15:57:32 <rook> panda: is iptables blocking?
15:57:34 <ranjan> hi, i have an RDO installation using packstack: 1 controller and 3 compute nodes with neutron. now the problem is the VMs are not able to get metadata
15:58:47 <rook> panda: I have seen this - VXLAN doesn't seem to have an iptables rule (though this could be old information)
15:59:30 <panda> rook: I don't think it is related to iptables, unless iptables drives flow information on openvswitch too
16:00:17 <rook> panda: okay... not sure I understand what you mean. iptables doesn't do "flows" (natting, but not flows).
16:00:26 <rook> panda: however it will throw away the traffic.
16:00:47 <rook> iptables -nvL
16:00:58 <rook> monitor which rule increments when you attempt to get a DHCP address
16:02:41 <morazi> ranjan, I'd suggest starting here:  http://openstack.redhat.com/Networking
16:03:22 <morazi> ranjan, there are a number of pretty network savvy folks on the channel that might be able to provide more guided help but I think we'd need to understand what you have set up/how you went about it with packstack
16:03:38 <ranjan> okie
16:03:47 <rook> panda: ovs-dpctl show  -- do you see the VXLAN port?
16:05:27 <panda> rook: yes, but following the flows in ovs-ofctl dump-flows br-tun, I can see all broadcast packets are dropped
16:06:03 <panda> rook: so, I can see the discover request with a tcpdump on the vm tap interface
16:06:09 <panda> I can see it on br-int
16:06:13 <panda> but not on br-tun
16:06:16 <rook> please post your ovs-ofctl dump-flows br-tun
16:06:19 <panda> because it is dropped
16:06:42 <rook> *if you have already - sorry
16:07:44 <panda> rook: this is on the separate compute node: http://paste.openstack.org/show/62606/
16:08:20 <rook> panda: yes - there is a bigger issue here
16:08:26 <rook> table=21 should have more flows.
16:08:44 <rook> panda: so you created a network, and launched a guest on this compute node?
16:08:54 <panda> yes
16:09:03 <rook> panda: do you see any errors in the plugin log
16:11:13 <panda> rook: /var/log/neutron/openvswitch-agent.log ? not recent errors
16:11:45 <rook> panda: ML2 ?
16:12:16 <rook> panda: are you using ML2
16:12:31 <panda> rook: no, openvswitch directly
16:12:40 <rook> panda: can you share your plugin.ini file
16:14:35 <panda> rook: deleted commented and empty lines: http://paste.openstack.org/show/62607/
16:17:53 <rook> panda: that looks good
16:18:11 <rook> when you switched to vxlan did you re-create the networks/guests?
16:19:05 <panda> rook: there were none, I switched immediately after the packstack installation
16:20:12 <rook> panda: what does provider:network_type show?
16:20:23 <rook> neutron net-show <network name>
16:20:30 <panda> rook: yes, was looking at it .. it says gre :(
16:20:41 <rook> panda: yup
16:20:50 <rook> panda: remove all networks and guests
16:20:57 <rook> restart all services
16:21:01 <rook> create networks and guests
16:21:13 <rook> panda: let me know if that gets you in better shape
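Rook's reset sequence, as a sketch; ids and names are placeholders, and openstack-service is the helper from openstack-utils:

    # Remove guests and the networks that were created while the plugin said gre
    nova delete <instance-id>
    neutron net-delete <old-network>
    # Restart the neutron services on each node, then recreate
    openstack-service restart neutron
    neutron net-create private
    neutron net-show private        # provider:network_type should now say vxlan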
16:22:00 <panda> rook: ok, just stay there :)
16:22:11 <rook> panda: lol, i should be around
16:22:15 <rook> panda: meetings all day
17:13:15 <panda> rook: things are getting better, now I have another rule on table 21, but i think I messed up my initial network configuration, to make it quick :/
17:14:25 <panda> rook: I'll clean it up a bit, but still, I don't understand why, even with the correct flow rule, I cannot see any traffic with a tcpdump -ni br-tun
17:16:54 <rook> panda: bbl, in meeting
17:29:57 <panda> ok, broadcast storms on openvswitch bridges .... check.
17:38:21 <verdurin> I'd like to test AIO on CentOS 6.5, just with internal connectivity
17:38:38 <verdurin> Are any special considerations needed for that case?
17:54:06 <rook> panda: in good shape?
17:58:49 <panda> rook: worse than ever. I know I should probably have restarted from scratch by now, but 1) I wouldn't learn anything, and 2) I can't believe things get so messed up that only a reinstall can fix them ..
17:59:50 <verdurin> Just tried AIO on CentOS 6.5 and there's a Ceilometer error caused by MongoDB connection failure
18:00:14 <panda> verdurin: are you using packstack ?
18:00:36 <rook> ttracy_wfh: ?
18:00:41 <rook> panda: ?
18:00:43 <kashyap> verdurin, Useful to post your versions of Ceilometer, MongoDB
18:01:03 <rook> panda: cannot launch guests? or they don't work at all?
18:02:19 <panda> rook: network configuration. I changed the ip of the "internal" interfaces and can't get the bridges to set up properly .. for example, patch-tun keeps being moved under br-int ... :/
18:02:33 <verdurin> panda: yes
18:03:44 <verdurin> kashyap: Ceilometer is 2014.1-0.3.b2, MongoDB is 2.4.6-1
18:05:55 <verdurin> Packstack version is 2013.2.1-0.29.dev956
18:23:57 <verdurin> kashyap: just re-ran it with the same answer file and there was no error...
19:19:55 <DG_> Hi
19:20:36 <DG_> I am trying to bring up a VM ontop of redhat. But its failing...
19:24:37 <morazi> verdurin, hrm that sounds vaguely familiar
19:25:50 <DG_> VM installation says "install exited abnormally". I am using a local redhat iso to bring it up
19:58:58 <kashyap> verdurin, Cool, it's safe/recommended to re-use the same answer file (as far as I know; I'm not much of a packstack user)
20:50:51 <Pursuit[LT]> Having a spot of trouble with Cinder. I'm using an NFS backend and can create volumes without trouble, but when I attempt to attach a volume to an instance, it fails silently. Can't find an error in the cinder logs. I've noticed that /etc/cinder/volumes is empty. Any ideas?
20:52:12 <Pursuit[LT]> well, never mind. a reboot seemed to fix it
20:52:44 <eharney> Pursuit[LT]: well... that's good. :)   generally an attach error like that will show up in the Nova compute log
02:23:46 <kitp> heyo. i'm trying to add a dns server to a subnet with neutron. first, is it possible to update the subnet with a new dns server? and if so, what does the neutron command look like?
02:24:19 <kitp> neutron subnet-update 00cf5145-cd44-4f29-906f-00af3eefd046 --dns_nameservers=10.0.1.1
02:24:20 <kitp> Invalid input for dns_nameservers. Reason: Invalid data format for nameserver: '10.0.1.1'.
02:24:35 <kitp> any ideas?
02:46:17 <kitp> need list=true before the dns server's ip.
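Putting kitp's finding together, the working form of the earlier command (same subnet id as above):

    # list=true tells the CLI the value is a list, which dns_nameservers requires
    neutron subnet-update 00cf5145-cd44-4f29-906f-00af3eefd046 \
        --dns_nameservers list=true 10.0.1.1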
05:33:11 * kashyap is going to stop Zodbot instance that we started just before the test days started
05:33:13 <kashyap> #endmeeting