Wednesday, 2019-12-11

*** Bunio_FH <Bunio_FH!> has quit IRC00:02
*** ericch <ericch!> has quit IRC00:12
*** rburton <rburton!> has quit IRC00:29
*** bluca <bluca!~bluca@> has quit IRC00:54
yoctiNew news from stackoverflow: Is it possible to run the rootfs containing kernel and DTB files for a specific board using qemu <>01:58
*** jklare <jklare!~jklare@> has quit IRC02:58
*** gnac <gnac!> has quit IRC02:59
*** jklare <jklare!~jklare@> has joined #yocto02:59
*** khem <khem!~khem@unaffiliated/khem> has quit IRC03:10
*** khem <khem!~khem@unaffiliated/khem> has joined #yocto03:22
*** saraf <saraf!~a_saraf@2405:204:911e:e7a7:d1ed:3b0b:a84f:8305> has joined #yocto05:19
*** saraf <saraf!~a_saraf@2405:204:911e:e7a7:d1ed:3b0b:a84f:8305> has quit IRC05:33
*** iceaway <iceaway!~pelle@> has joined #yocto05:48
*** jklare <jklare!~jklare@> has quit IRC05:59
*** jklare <jklare!~jklare@> has joined #yocto06:02
*** AndersD <AndersD!> has joined #yocto06:22
*** clopez <clopez!> has quit IRC06:34
*** edgar444 <edgar444!uid214381@gateway/web/> has quit IRC06:35
*** opennandra <opennandra!> has joined #yocto06:57
*** jklare <jklare!~jklare@> has quit IRC06:58
*** jklare <jklare!~jklare@> has joined #yocto07:00
*** guerinoni <guerinoni!> has joined #yocto07:01
*** pohly <pohly!> has joined #yocto07:10
*** leon-anavi <leon-anavi!~Leon@> has joined #yocto07:11
*** agust <agust!> has joined #yocto07:12
tlwoernerwhen i send an email to the list i see: "Trevor Woerner via Lists.Yoctoproject.Org" as the sender (instead of just, say, "Trevor Woerner". is that how everyone sees their own messages? or is my email not configured correctly?07:26
tlwoernerhalstead: ^^07:26
halsteadtlwoerner: everyone will see their own messages with the from line rewritten.07:27
tlwoernerhalstead: ok, thanks :-)07:28
halsteadDepending on the DMARC setting for the domain you'll see more addresses rewritten.07:28
*** ka6sox is now known as zz_ka6sox07:29
LetoThe2ndtlwoerner: gm!07:33
LetoThe2ndtlwoerner: are you still around for a second?07:34
*** frsc <frsc!> has joined #yocto07:39
*** locutus_ <locutus_!~LocutusOf@ubuntu/member/locutusofborg> has joined #yocto07:55
*** TobSnyder <TobSnyder!> has joined #yocto07:57
*** jklare <jklare!~jklare@> has quit IRC07:58
yoctiNew news from stackoverflow: Do_compile error for libpam package: Angstrom build <>07:59
*** farnerup <farnerup!> has joined #yocto08:00
*** hpsy <hpsy!~hpsy@> has joined #yocto08:01
*** florian_kc <florian_kc!~florian_k@Maemo/community/contributor/florian> has joined #yocto08:01
*** jklare <jklare!~jklare@> has joined #yocto08:02
*** zz_ka6sox is now known as ka6sox08:05
*** mckoan|away is now known as mckoan08:05
mckoangood morning08:05
*** LocutusOfBorg <LocutusOfBorg!~LocutusOf@ubuntu/member/locutusofborg> has joined #yocto08:06
*** locutus_ <locutus_!~LocutusOf@ubuntu/member/locutusofborg> has quit IRC08:06
*** LocutusOfBorg <LocutusOfBorg!~LocutusOf@ubuntu/member/locutusofborg> has quit IRC08:17
*** LocutusOfBorg <LocutusOfBorg!~LocutusOf@ubuntu/member/locutusofborg> has joined #yocto08:19
*** yann <yann!> has quit IRC08:22
*** florian_kc <florian_kc!~florian_k@Maemo/community/contributor/florian> has quit IRC08:24
*** rob_w <rob_w!~bob@unaffiliated/rob-w/x-1112029> has joined #yocto08:37
*** LocutusOfBorg <LocutusOfBorg!~LocutusOf@ubuntu/member/locutusofborg> has quit IRC08:46
*** jklare <jklare!~jklare@> has quit IRC08:58
*** LocutusOfBorg <LocutusOfBorg!~LocutusOf@ubuntu/member/locutusofborg> has joined #yocto09:00
*** jklare <jklare!~jklare@> has joined #yocto09:01
*** LocutusOfBorg <LocutusOfBorg!~LocutusOf@ubuntu/member/locutusofborg> has quit IRC09:06
*** LocutusOfBorg <LocutusOfBorg!~LocutusOf@ubuntu/member/locutusofborg> has joined #yocto09:06
*** locutus_ <locutus_!~LocutusOf@ubuntu/member/locutusofborg> has joined #yocto09:10
*** locutus_ <locutus_!~LocutusOf@ubuntu/member/locutusofborg> has quit IRC09:11
*** LocutusOfBorg <LocutusOfBorg!~LocutusOf@ubuntu/member/locutusofborg> has quit IRC09:11
*** goliath <goliath!> has joined #yocto09:13
*** yann <yann!~yann@> has joined #yocto09:22
*** Bunio_FH <Bunio_FH!> has joined #yocto09:24
*** florian_kc <florian_kc!~florian_k@Maemo/community/contributor/florian> has joined #yocto09:30
*** florian_kc is now known as florian09:32
*** farnerup <farnerup!> has quit IRC09:36
*** ka6sox is now known as zz_ka6sox09:37
*** grumble <grumble!~grumble@freenode/staff/grumble> has quit IRC09:38
*** grumble <grumble!~grumble@freenode/staff/grumble> has joined #yocto09:40
*** bluelightning <bluelightning!~paul@pdpc/supporter/professional/bluelightning> has quit IRC09:40
*** zz_ka6sox is now known as ka6sox09:41
*** bluelightning <bluelightning!~paul@pdpc/supporter/professional/bluelightning> has joined #yocto09:43
*** jklare <jklare!~jklare@> has quit IRC10:00
*** jklare <jklare!~jklare@> has joined #yocto10:03
RPok, lets try 3.1 M1 rc5 :/10:11
*** clopez <clopez!> has joined #yocto10:15
*** bluca <bluca!~bluca@> has joined #yocto10:24
*** T_UNIX <T_UNIX!uid218288@gateway/web/> has joined #yocto10:26
*** falk0n <falk0n!> has joined #yocto10:27
yoctiNew news from stackoverflow: Listing SRC_URI of all packages/files needed to build Yocto image <>10:29
LetoThe2ndRP: if you leave out one specific word, then the mail is pretty funny: "How about extending uninative with more bits?" :)10:44
*** ThomasD13 <ThomasD13!> has joined #yocto10:55
*** jklare <jklare!~jklare@> has quit IRC10:57
*** TobSnyder <TobSnyder!> has quit IRC10:58
*** JaMa <JaMa!~martin@> has joined #yocto10:58
*** TobSnyder <TobSnyder!> has joined #yocto10:58
*** jklare <jklare!~jklare@> has joined #yocto10:59
*** guerinoni <guerinoni!> has quit IRC11:04
*** farnerup <farnerup!> has joined #yocto11:09
*** farnerup <farnerup!> has quit IRC11:32
*** goliath <goliath!> has quit IRC11:33
JaMaRP: with the sstate changes from yesterday I'm seeing: Exception: bb.process.ExecutionError: Execution of '/OE/build/oe-core/tmp-glibc/work/x86_64-linux/quilt-native/0.66-r0/temp/run.sstate_create_package.25465' failed with exit code 1:11:35
JaMamktemp: failed to create file via template ‘/OE/build/oe-core/sstate-cache/universal/5b/sstate:quilt-native:x86_64-linux:0.66:r0:x86_64:3:5b5ddc2dc26dbbdaa89060f194e5a6b56a048a7a6ad3b71bd290bd528355e70d_populate_sysroot.tgz.XXXXXXXX’: No such file or directory11:35
JaMain clean build, any idea what went wrong?11:36
JaMaonly .siginfo files were created in sstate-cache11:37
*** berton <berton!~berton@> has joined #yocto11:38
*** berton <berton!~berton@> has quit IRC11:40
*** berton <berton!~berton@> has joined #yocto11:43
RPJaMa: fix is in master I think11:51
RPJaMa: I messed up the last fix :(11:51
*** farnerup <farnerup!> has joined #yocto11:54
*** leon-anavi <leon-anavi!~Leon@> has quit IRC11:54
JaMaRP: I do have this one
RPJaMa: and its still breaking? :(11:55
RPJaMa: the patch is messed up, again. The mkdir needs to be about 8 lines higher :(11:56
JaMayes, here is the trace if you can spot the code path leading to this
JaMaok, let me check11:57
RPJaMa: sorry, these changes were meant to be straightforward small tweaks and have turned into a disaster ;(11:57
*** jklare <jklare!~jklare@> has quit IRC11:57
JaMaah right, above mktemp11:58
JaMastrange that it's failing only on some hosts it seems11:58
*** goliath <goliath!> has joined #yocto11:59
RPJaMa: depends how many of the sstate dirs are present?12:00
RPJaMa: I've put a fix in master12:00
JaMaboth builds were clean without sstate-cache12:00
*** jklare <jklare!~jklare@> has joined #yocto12:00
JaMaand mktemp fails without existing directory on the builder where the build itself doesn't fail as well12:01
JaMamight be that only my local build where it was failing has hashequiv enabled12:01
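The failure JaMa pasted — mktemp refusing a template whose parent directory does not exist — and the ordering of the fix can be sketched in a few lines of shell; the paths here are stand-ins, not the real sstate layout:

```shell
#!/bin/sh
# Minimal sketch of the ordering bug discussed above: mktemp cannot create a
# file from a template if the parent directory is missing, so the mkdir has
# to run before sstate_create_package calls mktemp.
PKG="/tmp/sstate-demo/universal/5b/demo_populate_sysroot.tgz"   # stand-in path

mkdir -p "$(dirname "$PKG")"        # this is the line that had to move up
TFILE=$(mktemp "$PKG.XXXXXXXX")     # now the template resolves

echo "created $TFILE"
```

Without the `mkdir -p`, mktemp fails with exactly the "No such file or directory" message seen in the log.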
* RP stops rc5 and goes to rc6 12:02
LetoThe2nddisaster? i hear disaster?12:08
*** stuom1 <stuom1!3eecd81d@> has joined #yocto12:14
JaMaRP: btw: that mkdir is using spaces for indentation and other lines around are tabs12:16
JaMawhitespace fix
stuom1anyone have experience making recipes for npm projects?12:21
LetoThe2ndstuom1: many have, and it is universally considered a pain in the .... not only there, but just about everywhere.12:24
stuom1in my recipe (made with devtool) I have PV = "0.1.0+git${SRCPV}" but I get npm error for not finding file '/path/to/project-0.1.0+git999.tgz' because it is called '/path/to/project-0.1.0.tgz'. Where is that 999 in SRCPV coming from, and how do I get it to match?12:25
stuom1LetoThe2nd oh good, im not alone12:25
*** u1106_ is now known as u110612:26
*** opennandra <opennandra!> has quit IRC12:27
stuom1I get it to build, if I remove the +git${SRCPV} part completely, but it builds only ONCE, next try fails with completely new problems, so I try to tackle them one at a time12:27
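One hedged reading of the failure stuom1 hits: npm looks for a tarball named after ${PV}, while the version inside package.json stays 0.1.0, so appending ${SRCPV} makes the two names diverge. A sketch of the workaround stuom1 landed on (recipe contents and URL are hypothetical):

```
# Hypothetical npm recipe snippet: keep PV equal to the version recorded in
# package.json so npm finds project-${PV}.tgz; dropping the +git${SRCPV}
# suffix avoids the project-0.1.0+git999.tgz lookup that failed above.
PV = "0.1.0"
SRC_URI = "git://example.com/project.git;protocol=https"
SRCREV = "${AUTOREV}"
```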
*** bluelightning <bluelightning!~paul@pdpc/supporter/professional/bluelightning> has quit IRC12:31
RPJaMa: ahhrg, right :/12:33
stuom1with PV = "0.1.0" I can build it once using "devtool build" but if I try second time without any changes, or try to bitbake an image, I get "npm ERR! Cannot read property 'replace' of null"12:34
*** leon-anavi <leon-anavi!~Leon@> has joined #yocto12:37
tlwoernerLetoThe2nd: hey! it was off to bed seconds after i wrote that; i didn't even see the DMARC reply :-)12:42
LetoThe2ndtlwoerner: hehe12:43
LetoThe2ndtlwoerner: hope you had a good night then!12:44
tlwoernerLetoThe2nd: it was a little shorter than i would have liked; i'll probably need a nap this afternoon ;-)12:45
LetoThe2ndtlwoerner: happens, that! just had one remark, and one idea/question.12:46
LetoThe2ndtlwoerner: remark: once you find (or even create) a good outline on the topic you put on the ML, i'd be glad to see it!12:47
qschulzdoes anyone know how to increase loglevel or verbosity for qemu-wrapper? I'm trying to give more information on that gobject-introspection + icecc bug on >= warrior and khem asked for more info12:50
tlwoernerLetoThe2nd: sounds good. it's not enough to simply read a recipe top-to-bottom, one has to consider *when* various parts (e.g. tasks) will run. it's like reading VHDL or verilog: the order doesn't matter!12:51
*** rob_w <rob_w!~bob@unaffiliated/rob-w/x-1112029> has quit IRC12:52
*** vmeson <vmeson!> has quit IRC12:52
LetoThe2ndtlwoerner: the other is, like an idea. what do you think about preparing a bootable thumb drive that brings a fully prepared yocto/oe setup for a specific board, including hot sstate and all?12:53
LetoThe2ndi've been looking into ubuntu-based stuff here, but no real clue yet.12:54
LetoThe2ndi mean, it would be rather huge and hard to make downloadable - but not exactly unheard of, and drives are cheap and shipping them through mail is easy.12:56
*** jklare <jklare!~jklare@> has quit IRC12:59
tlwoernersneakernet! or carrier pigeon13:00
*** goliath <goliath!> has quit IRC13:02
tlwoernera long time ago ndec and myself were giving a tutorial at a Linaro Connect (in Dublin, if I remember correctly?) and we brought a thumb drive with a hot cache, sstate-cache, tmp etc. the only *gotcha* is that the host distro version has to match for the sstate-cache to work13:03
tlwoerneri think this was in the days before the amazing build accounts halstead sets up for devdays13:04
*** jklare <jklare!~jklare@> has joined #yocto13:04
tlwoernerwe handed out the drives and people copied them to their machines before starting the tutorial13:04
tlwoernerin sstate-cache is a host-distro-specific directory, if the host distro version doesn't match, the hot sstate is ignored and the machine ends up building everything from scratch13:05
tlwoernerbut you can take the same sstate-cache directory to several hosts and the directory will contain multiple host-distro-specific directories13:06
LetoThe2ndtlwoerner: exactly, there's still a couple of gotchas.13:06
erboYou could also maybe do builds in a docker container, to make sure "host distro" is the same regardless of the users distro choice.13:08
*** stuom1 <stuom1!3eecd81d@> has quit IRC13:09
LetoThe2nderbo: and ship a 50GB docker container?13:10
LetoThe2ndi'm rather envisioning a setup that boots, and has the absolutely highest chances of supporting one target instantly. without downloads, without big rebuilds.13:11
*** stuom1 <stuom1!3eecd81d@> has joined #yocto13:13
JaMatlwoerner: uninative helps with that, doesn't it?13:13
tlwoernerLetoThe2nd: ooh, i hadn't thought of making it bootable. that would solve the "host distro version" matching thing. but would mean you'd need drives for everyone in the class (which isn't too onerous)13:14
LetoThe2ndtlwoerner: i'm not thinking about classes.13:14
tlwoernerJaMa: isn't uninative about having the same host (native) tools? uninative doesn't setup sstate does it?13:15
erboLetoThe2nd: I usually bind mount the sstate-dirs, downloads, meta-data and tmp-dir etc into the container. So that the container doesn't ever change, it's basically just a poky-container with a few extra packages for convenience.13:16
erboBut sure, a bootable USB solves it too13:17
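erbo's bind-mount setup might look roughly like this; the image name and host paths are illustrative, and this assumes something like the crops/poky container (it is a sketch, not a tested invocation):

```
# Illustrative only: persist sstate, downloads and the build tree outside
# the container so the container image itself never changes between runs.
docker run --rm -it \
  -v /srv/yocto/sstate-cache:/workdir/sstate-cache \
  -v /srv/yocto/downloads:/workdir/downloads \
  -v /srv/yocto/build:/workdir/build \
  crops/poky:ubuntu-18.04 --workdir=/workdir
```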
LetoThe2nderbo: yeah. we do that too, and its all nice and such for people who know their way.13:17
*** rburton <rburton!> has joined #yocto13:18
LetoThe2ndi want something that i can give to somebody who literally has *no* *clue*13:18
erboYeah then adding a layer of docker might not be the best thing to do :)13:18
LetoThe2ndanything that requires manual interaction/setup is too much13:19
*** tgamblin <tgamblin!~tgamblin@> has joined #yocto13:21
*** georgem <georgem!~georgem@> has joined #yocto13:24
LetoThe2ndtlwoerner: yeah13:30
*** vmeson <vmeson!~rmacleod@> has joined #yocto13:32
*** BCMM <BCMM!~BCMM@unaffiliated/bcmm> has joined #yocto13:35
*** PaowZ_ <PaowZ_!~Vince@> has joined #yocto13:44
*** goliath <goliath!> has joined #yocto13:44
PaowZ_Hello there! I'm trying out YP through the use of Build Appliance.. I then ran a "bitbake core-image-minimal" as a simple test and ran into an error while building /home/builder/poky/meta/recipes-devtools/gcc/ well, it seems it does not build out of the box.. where should I look into, first ?13:47
LetoThe2ndbuild appliace?13:47
PaowZ_LetoThe2nd: yes, Build Appliance - Zeus 3.013:49
LetoThe2ndPaowZ_: can you pointme to your source?13:50
PaowZ_The Build Appliance is a virtual machine which enables you to build and boot a custom embedded Linux image with the Yocto Project using a non-Linux development system13:50
PaowZ_I just ran that image into VirtualBox13:51
LetoThe2ndyou see me surprised that the build appliance even is seeing new releases13:52
PaowZ_new package releases, you mean ?13:52
LetoThe2ndnew yocto releases, i mean13:53
RPPaowZ_: what was the error?13:53
*** abhiarora44 <abhiarora44!uid396576@gateway/web/> has joined #yocto13:55
RPPaowZ_: if you "bitbake gcc-cross-x86_64 -c clean" and rerun the build do you get the same error?13:56
PaowZ_by "rerun the build", there is a specific command or I just "bitbake core-image..." ?13:57
*** jklare <jklare!~jklare@> has quit IRC13:58
PaowZ_I'm cleaning the way you advise, though..13:58
RPPaowZ_: just bitbake core-image... again13:58
PaowZ_ok.. I keep you posted.. thanks..13:59
RPPaowZ_: its unusual for gcc-cross to break like that. The times I've seen a VM do this before were often memory issues on the system :(13:59
RPPaowZ_: if it does exactly the same thing again its probably not memory, if it breaks randomly somewhere else each time, it probably is14:00
RPPaowZ_: but its the best place to start in figuring out what is wrong...14:00
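The retest RP suggests, spelled out as commands (to be run inside a sourced build environment, e.g. after oe-init-build-env — not runnable outside one):

```
# Discard the failed cross-compiler build, then rerun the image build.
bitbake gcc-cross-x86_64 -c clean
bitbake core-image-minimal
# Same error in the same place => deterministic, likely a real build issue;
# random new failures each run  => suspect RAM / virtualisation problems.
```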
PaowZ_interesting point, RP.. here the settings for the VM: 4cores, 2048MB for ram and 54GB for disk..14:01
*** jklare <jklare!~jklare@> has joined #yocto14:01
PaowZ_aside, I have IO-APIC enabled and PAE disabled (don't need it, I think..)14:02
LetoThe2nd2048mb ram is a serious bummer14:02
PaowZ_too short ?14:02
PaowZ_ok.. well.. I gonna change the settings for upper amount..14:03
PaowZ_I keep you posted, guys, thanks for the responsiveness :)14:03
LetoThe2ndit hopefully shouldn't hit that hard, but 2GB including the whole building host system is really, really tiny :P14:04
PaowZ_2GB is really short in my guess, when it comes to build toolchain, indeed.. I should have thought about it..14:05
*** stuom1 <stuom1!3eecd81d@> has quit IRC14:07
RPPaowZ_: I'm more worried about whether there is an issue with the underlying memory hardware or the virtualisation mechanism. We've seen both give errors a bit like that...14:07
RPThe virt issue was many years ago when it was much newer technology14:08
PaowZ_this used to be true, for virtualization mechanism, back then.. but now it tends to be quite reliable14:10
RPPaowZ_: see how the build goes, that should give a hint14:10
PaowZ_yup.. it's currently ongoing..14:11
PaowZ_thanks to the cache..14:11
RPPaowZ_: has it gotten past gcc-cross?14:12
PaowZ_RP: currently being built :)14:22
*** BCMM <BCMM!~BCMM@unaffiliated/bcmm> has quit IRC14:23
JaMatlwoerner: maybe I misunderstood what you meant with "sstate-cache is a host-distro-specific directory, if the host distro version doesn't match, the hot sstate is ignored" but with uninative the ssate isn't specific to host distro and the path to sstate-cache or sstate mirror isn't included in the signatures, so it should be used just fine (as long as the metadata are matching)14:28
tlwoernerJaMa: ooh, i didn't know that! interesting14:29
JaMatlwoerner: without uninative all native sstate will be in host distro specific LSB directory inside sstate-cache and indeed not reused in different distros and because target recipes now include native signatures, it will make whole sstate-cache host-distro specific14:30
tlwoerneryes, that's what I've seen. i got around that by mounting the same sstate directory into different hosts and running each build on each host but with one sstate. it inflated the sstate, but then ensured the cache was "hot" for anyone using it on their machine14:32
tlwoernerbut your suggestion would be much better. noted! thanks :-)14:32
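For reference, the uninative setup JaMa is describing is what poky enables by default; in a custom distro config it is switched on with roughly the following (a sketch, assuming a reasonably current oe-core where the include file exists):

```
# Enabling uninative in a custom distro conf (poky already does this);
# the include supplies the UNINATIVE_URL/UNINATIVE_CHECKSUM for the release,
# which is what makes native sstate shareable across host distros.
require conf/distro/include/yocto-uninative.inc
INHERIT += "uninative"
```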
PaowZ_RP: do_compile ok for gcc-cross-x8614:35
*** andycooper <andycooper!uid246432@gateway/web/> has joined #yocto14:39
RPPaowZ_: I'd probably run memtest on the hardware that VM is running on14:41
RPPaowZ_: also possible it was some kind of parallel make race but less likely14:41
qschulzI'm wondering, what's the rationale behind having MACHINEOVERRIDES loosely set to MACHINE, forcing us to use prepends. We're leveraging this prepend stuff in some of our includes as well, but I discovered today that we should put those MACHINEOVERRIDES =. *before* all `require` prepending to said variable14:42
PaowZ_RP: Using the "top" command I could estimate how heavy the build threads are.. It really needed 8GB at least..14:42
RPqschulz: MACHINEOVERRIDES is what is added to OVERRIDES, not MACHINE14:43
RPPaowZ_: right, but that should have given an OOM error, not an assembler failure14:43
RPqschulz: exactly, so what is your question?14:46
RPit has a default, a machine can override it to what it needs if the default doesn't work?14:47
RPqschulz: I guess the answer you probably are looking for is some machines may want to override the whole thing and not have MACHINE in there14:48
qschulzRP: In my mind, one would absolutely want MACHINE to be the rightmost part of MACHINEOVERRIDES14:52
PaowZ_RP: "god knows.." ;)14:52
RPqschulz: not if you're making some clone of a machine to change one setting and want it to otherwise behave as the original14:53
RPqschulz: we even test that14:53
qschulzRP: we require the original machine conf file for that14:53
qschulzand MACHINEOVERRIDES the name of the required machine conf file14:53
qschulzwhich means we need this MACHINEOVERRIDES before the require14:54
qschulzI'm not saying it does not work. It just feels weird to me for this particular variable to "be forced" to have it prepended before doing requires14:55
RPqschulz: well, I'm saying that people do use it like I describe and its very hard to undo an append/prepend so this is why its like this14:55
RPqschulz: I suspect its expected that most machines would just force a new value and include MACHINE14:56
RPthen no prepending required14:56
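The two styles qschulz and RP are contrasting can be sketched in a machine conf (machine names here are hypothetical):

```
# mymachine-dev.conf — prepend *before* the require, as qschulz notes, so the
# base machine's own MACHINEOVERRIDES prepends end up in the right order:
MACHINEOVERRIDES =. "mymachine-dev:"
require conf/machine/mymachine.conf

# ...or, as RP suggests, force the whole value and include MACHINE yourself,
# which needs no prepends at all (and lets a clone omit its own name):
# MACHINEOVERRIDES = "mymachine:${MACHINE}"
```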
*** jklare <jklare!~jklare@> has quit IRC14:56
*** roussinm <roussinm!> has joined #yocto14:59
qschulzRP: that means that we need to force MACHINEOVERRIDES for each similar machine, error-prone if you have many different includes and MACHINEOVERRIDES.14:59
RPqschulz: so build it carefully with prepends and appends15:00
*** jklare <jklare!~jklare@> has joined #yocto15:00
qschulzRP: Is there any known use case for not wanting MACHINE in MACHINEOVERRIDES?15:00
RPqschulz: yes, I described it above15:00
*** kanavin__ <kanavin__!~kanavin@> has quit IRC15:00
*** kanavin__ <kanavin__!~kanavin@> has joined #yocto15:00
RPqschulz: I create a machine called "qemux86copy", I don't want "qemux86copy" in overrides15:00
RPqschulz: mentioned in meta/lib/oeqa/selftest/cases/ and used by several people as a way of testing various pieces of the system15:01
qschulzRP: if we don't leverage qemux86copy anywhere, it does not hurt to have it in MACHINEOVERRIDES15:01
qschulzRP: ok, will read. Thanks ,anyway was just being curious :)15:02
RPqschulz: extra overrides slow down the datastore and have a habit of being dangerous IMO15:02
* RP is suitably scared of overrides but I know how the code works :(15:02
RPqschulz: you are correct in that it could have been done differently however it was done this way and it has pros/cons15:03
qschulzwell, I'm scared too now (again) because I thought we already fixed our disordered MACHINEOVERRIDES but no :) forgot this one15:03
*** ThomasD13 <ThomasD13!> has quit IRC15:04
*** AndersD <AndersD!> has quit IRC15:05
qschulzRP: I just wanted to hear if there was a very specific use case which warranted this behavior or if it just turned out to be the chosen implementation. I.e. I wanted to know if I should say to people in my company "it is how it is" or "it is like that because of..."15:05
*** sstabellini <sstabellini!sstabellin@gateway/shell/xshellz/x-wwwzglyzdetzrhzc> has joined #yocto15:06
qschulzyour qemu example looks like a very specific use case so I'll read that file. Thanks for having taken the time to answer me, much appreciated15:06
RPqschulz: the principal use case is to allow overriding when MACHINE isn't what you want in overrides15:06
RPchanging an append/prepend is near impossible so that is why we don't use them here15:06
qschulzRP: understood, I was failing to see the "when MACHINE isn't what you want in overrides" use case15:06
qschulzRP: yeah and we don't want to use _remove as much as possible15:07
RPqschulz: I'd prefer never ;-)15:07
RPqschulz: using _remove in the middle of overrides construction would also be asking for trouble/performance problems15:09
RPqschulz: sorry, I think I'm feeling jaded today :/15:12
* RP has an inbox of email about how performance sucks and we have all these other problems :/15:13
LetoThe2ndRP: pipe to /dev/null and let people screw themselves :)15:13
RPLetoThe2nd: perhaps you could take care of bug triage this week :)15:14
LetoThe2ndRP: i can declare it as a yocto bof?15:15
RPLetoThe2nd: not really :)15:23
LetoThe2ndRP: no deal then.15:23
JPEWRP: Here's my PoC on how I think the AB hash equiv failure could be fixed: I don't think it's doing the "uncover" in the correct location, and it royally screws up the build stats reporting (but I have an idea for that).15:25
*** goliath <goliath!> has quit IRC15:29
*** BCMM <BCMM!~BCMM@unaffiliated/bcmm> has joined #yocto15:29
RPJPEW: Allowing multiple "normal" task execution?15:30
RPJPEW: that breaks a ton of assumptions all over the system :(15:30
JPEWRP: Ya, basically, but only when they were previously covered by setscene15:31
JPEWRP: e.g. were previously skipped because they were covered by setscene15:31
RPJPEW: ah, right. I thought the code could already handle this, we just disabled it due to problems?15:33
JPEWRP: Maybe... but that might have been before we had the "force tasks to be equivalent"15:33
JPEWRP: Without that, stuff was reexecuting all the time15:34
JPEWI think we need it back for the specific (and rare) case where the server can't give us the hash we want15:34
RPJPEW: I don't like it, we disabled this for a reason :/15:36
*** ericch <ericch!> has joined #yocto15:43
*** farnerup <farnerup!> has quit IRC15:48
*** jklare <jklare!~jklare@> has quit IRC15:57
*** jklare <jklare!~jklare@> has joined #yocto16:00
*** leon-anavi <leon-anavi!~Leon@> has quit IRC16:04
JPEWRP: Hmm.... Ok. I'll keep thinking about it.16:08
RPJPEW: sorry, not trying to be negative, I appreciate the thought. I just worry about the problems we had with this previously. Perhaps it would be ok in the rare case this happens...16:12
RPJPEW: I'll ponder it some more too16:12
JPEWRP: It's fine :)16:13
* RP wonders why buildperf is breaking but only for release builds16:13
* RP suspects need to file another bug16:13
RPreproducibile builds failed the selftests for 3 out of 4 in the release build too :(16:14
*** yann <yann!~yann@> has quit IRC16:16
*** yann <yann!~yann@> has joined #yocto16:16
*** orzen <orzen!> has quit IRC16:17
JPEWRP: Do you have a log on that one?16:17
armpitRP, the buildperf systems are Union16:19
armpitRP, are the failures saving perf results?16:19
* armpit should just go look16:20
RPJPEW: selftests from
RPJPEW: will be the perl issue I expect16:21
RParmpit: /home/pokybuild/yocto-worker/buildperf-ubuntu1604/yocto-autobuilder-helper/scripts/upload-error-reports: line 11: cd: /home/pokybuild/yocto-worker/buildperf-ubuntu1604/build/build/../: No such file or directory16:21
RPwhatever that means16:21
armpittwo build dirs ??16:21
RParmpit: normal16:22
RParmpit: one is buildbot, one is ours16:22
armpitthere have been a range of issues. your master-next build was looking for /master16:22
armpitI opened a bug on that one16:23
RParmpit: master-next would look for master to compare against16:23
armpitit was looking for /master dir to save16:23
JPEWRP: One of them looks like libgcc, I don't think thats one of the usual perl ones16:24
RPJPEW: this is the problem with failures, that mask things16:24
JPEWRP: Ya :(16:26
* JPEW really needs to get the diffoscope reporting into the test result16:26
*** TobSnyder <TobSnyder!> has quit IRC16:26
JaMashould unzip be in HOSTTOOLS? (there is gunzip already), today I've started seeing do_unpack failures "/bin/sh: 1: unzip: not found"16:30
*** bernardoaraujo <bernardoaraujo!uid179602@gateway/web/> has quit IRC16:31
*** orzen <orzen!> has joined #yocto16:32
*** yann <yann!~yann@> has quit IRC16:33
*** Bunio_FH <Bunio_FH!> has quit IRC16:34
RPJaMa: doesn't the fetcher code add an unzip-native depends (or zip-native or whatever?)16:40
RPJaMa: gzip is common for sources and our bootstrap, zip, less so on linux16:40
*** florian <florian!~florian_k@Maemo/community/contributor/florian> has quit IRC16:41
RPJaMa: yes, it does in base.bbclass16:42
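As RP notes, a recipe fetching a .zip normally gets unzip-native as a dependency automatically via base.bbclass, so host unzip should not be needed. If someone did want the host binary available as a stopgap, the local.conf tweak would look like this (illustrative, not the recommended fix):

```
# local.conf — allow the build to fall through to the host's unzip binary.
# Normally unnecessary: the fetcher adds unzip-native DEPENDS for zip SRC_URIs.
HOSTTOOLS += "unzip"
```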
*** goliath <goliath!> has joined #yocto16:50
*** hpsy <hpsy!~hpsy@> has quit IRC16:51
*** vineela <vineela!~vtummala@> has joined #yocto16:52
kergothAnyone know offhand if oe.package/package_manager/sdk/rootfs provide methods to produce the list of files for an installed package (ideally, all installed packages) in a rootfs/sysroot/sdk?16:56
kergothlooking to provide a recipe/files mapping in a sysroot in a non-package-manager-specific way16:57
*** jklare <jklare!~jklare@> has quit IRC17:00
millonihow is TARGET_ARCH set? i cant find the history for it in `bitbake -e`17:02
*** jklare <jklare!~jklare@> has joined #yocto17:03
*** marble_visions_ <marble_visions_!~user@> has quit IRC17:06
*** marble_visions <marble_visions!~user@> has joined #yocto17:07
*** ka6sox is now known as zz_ka6sox17:07
marble_visionshi all, i want to support two versions of the kernel, built from the same source, dev and rel17:10
marble_visionsi am wondering what's the best approach to this, without involving yocto-kernel-cache17:11
marble_visionsone thing that pops up in the manuals is to have config fragments that enable features... so with this scheme i'd have a stripped down rel kernel, with features adding the dev stuff on top17:12
millonimarble_visions: do i understand correctly that you have your own kernel tree, that is you dont want/cannot use the default yocto one17:12
marble_visionsmilloni: the kernel comes from a semi vendor, namely nxp, via the meta-fsl-bsp-* layers17:12
marble_visionsand i'm not sure how much of the upstream yocto kernel dev practices they follow, i need to doublecheck17:13
milloniand you want to continue using their recipe, pointing to the same git commit, just use a different config?17:13
marble_visionsexactly, different configs*17:13
marble_visionsand i guess i would select the configs on the image-level17:14
*** jonmason <jonmason!sid36602@gateway/web/> has quit IRC17:14
marble_visionsalthough i'd like all of them to be built at the same time.. since there won't be any complex dependencies between kernels and userspace17:14
*** jonmason <jonmason!sid36602@gateway/web/> has joined #yocto17:14
millonii suppose you dont want to keep two different configs17:14
millonibecause you want to avoid having an overlap between (and then having to be careful that they dont become out of sync)?17:15
marble_visionsactually i'd like to17:15
marble_visionsoh yeah..17:15
marble_visionswhat's the alternative? kernel feature selection?17:15
millonithere are config "fragments"17:16
milloniso you would have your defconfig for your rel build17:16
milloniand then assume for your dev build you want to enable additional options17:17
milloniyou would put those options in a config fragment17:17
marble_visionsyep, that was the cleanest i could figure out17:17
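A config fragment is just a small file of the extra Kconfig options layered on top of the defconfig. For a hypothetical dev build it could be as simple as (option choices here are examples, not a recommendation):

```
# debug-config.cfg — hypothetical fragment enabling extra dev/debug options;
# the kernel recipe merges it on top of the release defconfig.
CONFIG_DEBUG_KERNEL=y
CONFIG_DEBUG_INFO=y
CONFIG_DYNAMIC_DEBUG=y
```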
millonibut first, you want to be able to build rel and dev builds17:18
milloniit seems to me that the best way to implement this is this17:18
milloniyou'd create two distro configs17:19
millonione for rel the other dev17:19
milloniand that's going to tweak the options you want for your rel/dev images17:19
marble_visionsso it would be one distro, two images?17:20
millonitwo distros17:20
millonior actually, i suppose a two machine configs would make more sense17:20
*** WillMiles <WillMiles!> has joined #yocto17:21
milloniso, do you understand the distinction between distro, machine, and image?17:21
marble_visionsi know they're distinct, but i don't know the details17:21
marble_visionsif it's in the manuals i will find it17:21
marble_visionsbut if there is any arcane knowledge please share :)17:22
milloniit's not really arcane knowledge, it's just a bit fuzzy...17:22
millonihopefully someone will correct me if im wrong, but here's my understanding17:22
millonifirst, image is defined by your image recipe, and it's just a regular recipe17:23
millonithis is important and a lot of people miss this17:23
millonian image recipe defines how an image is built, it will use the artifacts built from other recipes etc, but the way it is implemented it's just a regular recipe17:23
millonithis is important because it means any option set in the image recipe only affects the image recipe17:24
milloniit's a common mistake to set let's say PREFERRED_VERSION_virtual/kernel in the image recipe and think it's going to affect the entire build17:25
milloniso in your specific case we're going to leave the image reciple alone17:26
millonithe other two set global build options17:26
millonii.e distro and machine17:26
milloniwhich means that any option you set there will affect every recipe in the build17:26
millonithis is perfect for making build-wide tweaks17:27
JaMaRP: looks like unzip issue might be caused by broken sstate for it (run out of disk space during the build last night), the interpreter doesn't look right sysroots-components/x86_64/unzip-native/usr/bin/unzip: ELF 64-bit LSB shared object, x86-64, version 1 (SYSV), dynamically linked, interpreter /jenkins, for GNU/Linux 3.2.0, BuildID[sha1]=1c1124606c58a4945ee41e0a453376b9827bb5d7, stripped17:27
millonii said the concept is a bit fuzzy because to a degree it's a matter of convention what should go into distro and what should go into the machine17:27
milloniboth are build-wide17:28
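The scoping milloni is describing can be sketched as three levels; all file and variable values below are hypothetical examples, not from any particular BSP:

```
# conf/distro/myproject.conf -- build-wide policy (affects every recipe)
DISTRO_FEATURES_append = " systemd"

# conf/machine/myproject.conf -- build-wide hardware settings
KERNEL_IMAGETYPE = "zImage"

# my-image.bb -- an ordinary recipe; settings here affect ONLY this recipe,
# so a PREFERRED_VERSION set here would NOT change which kernel is built
IMAGE_INSTALL_append = " openssh"
```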
JaMamake interpreter also doesn't look very complete sysroots-components/x86_64/make-native/usr/bin/make: ELF 64-bit LSB shared object, x86-64, version 1 (SYSV), dynamically linked, interpreter /lib64/l, for GNU/Linux 3.2.0, BuildID[sha1]=9b552261fb11831ed9c357668beae0962cb967d4, stripped - actually it's in all native binaries I've checked now17:28
marble_visionsmilloni: right17:28
millonimarble_visions: but basically, i assume you've got your bsp for the board that you're using?17:28
millonidoes it define a machine config?17:29
marble_visionsit does17:30
milloniso lets say your machine config is called machine.conf17:30
milloniin your layer, i would create a new machine config17:30
millonirequire conf/machine/machine.conf17:31
milloniand then below that, anything you want to set for your machine17:31
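Milloni's suggestion could look like the following, assuming the BSP ships a machine config named machine.conf and the new dev machine is called myproject-dev (both names hypothetical):

```
# conf/machine/myproject-dev.conf -- lives in your own layer
require conf/machine/machine.conf

# anything below extends or overrides the BSP machine, e.g. a
# hypothetical dev-only serial console setting:
SERIAL_CONSOLES = "115200;ttyS0"
```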
milloninow we just need to figure out how to conditionally include a kernel config fragment based on the machine17:32
marble_visionsi guess the quick and dirty way would be in local.conf with the VAR[var] syntax?17:33
*** WillMiles <WillMiles!> has quit IRC17:33
marble_visionsi see uboot does this with UBOOT_CONFIG[mfg/nand/etc]17:33
milloniim not familiar with that syntax17:33
millonii was thinking17:34
milloniSRC_URI += "file://debug-config.cfg"17:34
milloniexcept you would only append that for your debug machine17:34
milloninot sure but i think this may be the syntax:17:35
milloniSRC_URI_myproject-dev += "file://debug-config.cfg"17:35
millonimeans only append to SRC_URI for MACHINE myproject-dev17:35
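In the pre-3.4 underscore override syntax, the usual spelling is `_append_<override>` rather than `VAR_<machine> +=`. A hedged sketch for a kernel bbappend (recipe wildcard, fragment name, and machine name are all illustrative):

```
# linux-%.bbappend in your layer
FILESEXTRAPATHS_prepend := "${THISDIR}/files:"

# only applied when MACHINE provides the myproject-dev override
SRC_URI_append_myproject-dev = " file://debug-config.cfg"
```

This works because the MACHINE name is automatically part of OVERRIDES, which is what makes the conditional append take effect only for that machine.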
marble_visionsand i select MACHINE when setting up the build17:36
marble_visionswhen sourcing yocto setup17:36
marble_visionsokay, that looks reasonable17:36
milloniyeah, either in local.conf or by setting MACHINE env variable17:36
millonithe sstate cache will be valid between dev and rel builds (most of it)17:37
*** florian <florian!~florian_k@Maemo/community/contributor/florian> has joined #yocto17:37
milloniso you won't have to rebuild too much, and you wont have to maintain separate layers for your debug build17:38
milloniwe went for a separate layer approach on our project and it kind of proved to be a pain in the ass :)17:38
millonii hear zeus has support for building multiple machines at the same time. not sure how that works but it makes it even simpler17:38
millonipotentially you can build both with one command (?)17:39
millonithe latest version of yocto17:39
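The zeus-era multiconfig feature milloni alludes to roughly works as follows; the config names and image name here are made up for illustration:

```
# conf/local.conf
BBMULTICONFIG = "dev rel"

# conf/multiconfig/dev.conf
MACHINE = "myproject-dev"

# conf/multiconfig/rel.conf
MACHINE = "myproject"
```

With that in place, something like `bitbake mc:dev:my-image mc:rel:my-image` should build both machines in a single invocation.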
marble_visionsmilloni: thanks for the info, it puts things into perspective17:40
marble_visionsi'll bet on two mach confs17:40
marble_visionssee where that takes me17:40
milloniim happy to help, unfortunately i've not been able to test that out myself17:40
milloniso i dont know how good it's going to be in practice17:41
millonii'd be interested to hear how it works out for you17:41
marble_visionsindeed, i might share in due time17:41
*** WillMiles <WillMiles!> has joined #yocto17:44
*** kanavin__ <kanavin__!~kanavin@> has quit IRC17:44
*** abhiarora44 <abhiarora44!uid396576@gateway/web/> has quit IRC17:45
*** kanavin <kanavin!~kanavin@> has joined #yocto17:46
*** BobPungartnik <BobPungartnik!~BobPungar@> has joined #yocto17:46
*** BobPungartnik <BobPungartnik!~BobPungar@> has quit IRC17:48
*** florian <florian!~florian_k@Maemo/community/contributor/florian> has quit IRC17:53
*** jklare <jklare!~jklare@> has quit IRC17:58
*** pohly <pohly!> has quit IRC18:00
*** mckoan is now known as mckoan|away18:00
*** jklare <jklare!~jklare@> has joined #yocto18:02
*** frsc <frsc!> has quit IRC18:06
*** florian <florian!~florian_k@Maemo/community/contributor/florian> has joined #yocto18:13
*** ericch <ericch!> has quit IRC18:21
*** ericch <ericch!> has joined #yocto18:21
*** Bunio_FH <Bunio_FH!> has joined #yocto18:23
*** florian <florian!~florian_k@Maemo/community/contributor/florian> has quit IRC18:25
*** H-U <H-U!> has joined #yocto18:27
*** H-U <H-U!> has quit IRC18:32
*** pohly <pohly!> has joined #yocto18:34
*** BCMM <BCMM!~BCMM@unaffiliated/bcmm> has quit IRC18:35
*** yann <yann!> has joined #yocto18:36
*** H-U <H-U!> has joined #yocto18:38
*** pohly <pohly!> has quit IRC18:39
*** H-U is now known as hu18:39
*** hu is now known as h-u18:39
*** h-u <h-u!> has left #yocto18:44
*** h-u <h-u!> has joined #yocto18:44
*** bluelightning <bluelightning!~paul@pdpc/supporter/professional/bluelightning> has joined #yocto19:21
RPLooks like we've destroyed the performance of world builds with hashequiv :(19:23
RP - 7 hours and task 1806 of 2384819:23
*** fray <fray!> has quit IRC19:28
*** orzen <orzen!> has quit IRC19:30
JPEWhuh, thats not really the desired effect19:31
JPEWLots of "checking sstate mirror object availability..."19:32
*** orzen <orzen!> has joined #yocto19:35
*** Ad0 <Ad0!~Ad0@> has quit IRC19:39
*** Ad0 <Ad0!~Ad0@> has joined #yocto19:45
*** tgamblin <tgamblin!~tgamblin@> has quit IRC19:58
*** jklare <jklare!~jklare@> has quit IRC19:59
*** jklare <jklare!~jklare@> has joined #yocto20:02
*** Lihis <Lihis!> has quit IRC20:03
*** T_UNIX <T_UNIX!uid218288@gateway/web/> has quit IRC20:03
*** Lihis <Lihis!> has joined #yocto20:04
*** nerdboy <nerdboy!~sarnold@gentoo/developer/nerdboy> has joined #yocto20:05
RPJPEW: I need to reproduce this under profiling, see where the bottleneck is. Has to be something silly like the log messages20:26
*** fray <fray!> has joined #yocto20:43
*** goliath <goliath!> has quit IRC20:45
*** tgamblin <tgamblin!> has joined #yocto20:46
*** florian <florian!~florian_k@Maemo/community/contributor/florian> has joined #yocto20:49
*** jklare <jklare!~jklare@> has quit IRC20:57
*** jklare <jklare!~jklare@> has joined #yocto21:01
*** Pharaoh_Atem <Pharaoh_Atem!~neal@fedora/ngompa> has quit IRC21:06
rburtonRP: dare i say, turn off hash equiv for m1?21:07
RPrburton: I'm starting to think we may have to :(21:09
*** Pharaoh_Atem <Pharaoh_Atem!~neal@fedora/ngompa> has joined #yocto21:09
RPRunning setscene task 287565 of 1192121:12
JPEWHah! That's impressive21:13
RPJPEW: for 23000 tasks in total, very21:13
RPI think its an accounting error rather than the real number of tasks it ran FWIW21:16
RPbut still looks insane21:16
frayHmm.. 23000 tasks doesn't seem all that large.. but ya, 287k of 23k is a problem.. ;)21:16
RPNOTE: Running setscene task 222166 of 1192121:16
RPNOTE: Running setscene task 222246 of 1192121:16
RPNOTE: Running setscene task 223302 of 1192121:16
RPthat is it incrementing21:17
RPJPEW: I think its the sstate rescans which are slow21:17
JPEWRP: Ya, that's what I was thinking.21:17
RPhalstead: this comes back to the NAS not responding very quickly :(21:18
JPEWWhy are there so many each time?21:18
RPJPEW: I'd guess a world build has lots of endpoints which means lots of rehashes21:18
*** berton <berton!~berton@> has quit IRC21:19
JPEWRP: I'm not too familiar with the "checking for object availability"; is it just looking for a single file for each message (e.g. just a LOOKUP)21:19
RPJPEW: in this case probably21:20
RPJPEW: that message comes from the hashvalidate function in sstate.bbclass21:20
halsteadRP, I think fixing this will require something other than optimizing the NAS. We could remove more objects or change the directory structure.21:20
RPJPEW: its from the update_scenequeue_data([tid] call in runqueue21:21
RPhalstead: right, we talked about changing the directory structure21:21
RPhalstead: I think ext4 as a filesystem may be faster for this btw but I know that isn't feasible21:22
halsteadRP, or I suppose we could have a dedicated mount point for sstate with caching enabled. Caching led to races in the past so it's disabled.21:22
RPhalstead: wish I could remember the details of the race issue. I wonder if we fixed that somehow21:23
JPEWRP: Hmm, we could batch them all together in a single call to update_scenequeue_data. That would allow them to at least happen in parallel21:23
RPhalstead: is it easy to turn on the caching to test?21:23
RPJPEW: yes, we should do that21:23
* JPEW writes a patch21:24
* RP clearly isn't thinking straight as that wasn't clear to me :/21:24
halsteadRP, I don't know the details but there were multiple workers trying to write to the same file because it didn't exist in the cached dir list.21:24
RPhalstead: easy to turn on?21:25
fraythat has been reported by other users as well.. I'm wondering if there needs to be some kind of a lock file for things21:26
RPfray: you need to be specific about what exactly21:26
RPfray: I remember what michael is talking about in this case and I know we changed the code21:27
RPI wonder if we change the sstate write code to do a mv -n, how nfs copes with that. Is the -n a syscall/nfs preserved operation (no-clobber)21:28
fraysstate-cache directory21:28
mischiefis there an easy way to debug why 'apt-get install ...' fails with 'unmet dependencies' in my do_rootfs task for my image?21:28
frayI've been told more than once at the ELC / ELC-E in the last 3 years that people are seeing multiple writes when the original didn't exist.. often nearly at the same time21:28
fray(lesser I've also been told that about the downloads directory.. but that was before we fixed a bunch of lock files prior to Zeus)21:29
halsteadRP, requires remounting nfs. Simple to do between builds.21:29
RPhalstead: server or client side?21:30
*** WillMiles <WillMiles!> has quit IRC21:30
RPhalstead: client I'd assume?21:30
halsteadRP client, yep21:30
RPhalstead: could we remount mid build, see if it speeds up?21:31
RPrisk taking is a skill at times, right? :)21:31
RPlooks like we'd want the RENAME_NOREPLACE flag of renameat221:32
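RP's no-clobber question is about avoiding the check-then-rename race on NFS. As an illustration of why atomicity matters (this is a hypothetical publisher, not what sstate.bbclass actually does), no-clobber semantics can be had portably with link(), which fails atomically with EEXIST when the destination already exists:

```python
import os
import tempfile

def publish_no_clobber(srcdata: bytes, dest: str) -> bool:
    """Write srcdata to dest atomically, refusing to clobber.

    Writes to a temp file in the same directory, then hard-links it to
    dest. link() is atomic and historically NFS-safe, unlike a
    stat-then-rename check, which races between the stat and the rename.
    Returns True if we published, False if another writer got there first.
    """
    d = os.path.dirname(dest) or "."
    fd, tmp = tempfile.mkstemp(dir=d)
    try:
        os.write(fd, srcdata)
        os.close(fd)
        try:
            os.link(tmp, dest)  # atomic: raises FileExistsError if dest exists
            return True
        except FileExistsError:
            return False
    finally:
        os.unlink(tmp)  # the temp name is no longer needed either way
```

RENAME_NOREPLACE via renameat2() would give the same guarantee in a single syscall on kernels that support it; the link() trick is the older portable fallback.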
halsteadRP, We can try. Shall I go for it?21:33
RPhalstead: yes, which machine/build are you going to change?21:35
RPhalstead: just so I can watch the log output21:35
RPhalstead: looks like the world builds are on centos7-ty-1 and debian8-ty-121:36
halsteadRP, I was going to push to all of them. Starting with just one is a better idea. :)21:36
RPhalstead: yes, lets just try one21:37
*** hpsy <hpsy!~hpsy@> has joined #yocto21:37
halsteadRP, I'll change debian8-ty-121:37
RPhalstead: ok21:38
*** vmeson <vmeson!~rmacleod@> has quit IRC21:39
RPLooking at the processes on the autobuilder I'm doubting this is NFS bound, its burning 95% cpu in cooker21:43
RPhalstead: hmm, I think that has dropped cpu usage21:46
zeddiihas anyone else tried to build a static libcrypt ?21:47
zeddiiI'm trying and failing at the moment.21:47
halsteadRP, I've lazy unmounted /srv/autobuilder and mounted with the new options at the same location. Testing to see if it's working as expected.21:48
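The mount-option change halstead describes might look like the following fstab line; the server path and timeout value are made up, and `noac` is the option that would have been disabling the attribute caching being re-enabled here:

```
# /etc/fstab sketch: NFS export mounted with attribute caching enabled
# (actimeo, in seconds) rather than the cache-disabling 'noac'
nas.example:/export/autobuilder  /srv/autobuilder  nfs  rw,actimeo=30  0  0
```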
RPhalstead: thanks, I think it may be helping, we'll see...21:49
RPzeddii: we disable static libraries so I assume you're disabling that?21:49
*** alessioigor <alessioigor!> has quit IRC21:50
RPJPEW: I think its spending so much CPU in the rehashing code, the rest of runqueue is simply sitting waiting :(21:51
RPnfs is part of it but only a small part :(21:51
zeddiirp: hmm. that might be it, I've never had to worry about this before. so I've never looked. I'll see if I can find the right toggle.21:51
zeddiiI'm just trying to build a static busybox and it is chaining to needing a static libcrypt ..21:52
zeddiiI'll go check the initramfs layers to see if they have something for this already.21:52
RPzeddii: meta/conf/distro/include/no-static-libs.inc21:53
zeddiiahah. will go poke around.21:53
RPzeddii: enabling those wastes so much more build time ;-)21:55
*** adelcast <adelcast!~adelcast@> has left #yocto21:55
JPEWRP: Also possible21:55
zeddiiyah. and I just want it for this one package + its dependencies .. so I'll see if I can figure out another way21:55
RPzeddii: we used to special case sqlite-native for pseudo-native21:56
zeddiiI can see about forcing it on for the libxcrypt build in my distro.21:56
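no-static-libs.inc works by setting DISABLE_STATIC (which feeds --disable-static into the configure arguments), so re-enabling static archives for a single recipe can be sketched as a per-recipe override, e.g. in a distro conf or bbappend:

```
# sketch: clear the build-wide static-library ban for libxcrypt only
DISABLE_STATIC_pn-libxcrypt = ""
```

This keeps the rest of the build static-free, which matters given RP's point about static libraries wasting build time.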
*** jklare <jklare!~jklare@> has quit IRC21:58
*** adelcast <adelcast!~adelcast@> has joined #yocto21:58
RPJPEW: I think I can see how we could unroll the loops at the top of process_possible_migrations() too, at least to have one big recursive rehash rather than multiple21:59
*** jklare <jklare!~jklare@> has joined #yocto22:01
halsteadRP, Caching is now in effect.22:03
JPEWRP: Ah, ya! thats probably causing the server to switch in and out of the fast streaming mode22:04
JPEWIf we can make all the get_unihash() calls occur sequentially without intermixing report_unihash_equiv() that would be good22:04
RPJPEW: We're not seeing any _equiv() reports I see in the logs so that isn't the issue22:05
JPEWRP: I saw a few in the log I was looking at22:06
halsteadRP, I have the sstate cache clean disabled. Do we need a way to inform the hashserv about deleted objects before enabling it?22:06
JPEWMaybe not often enough to be the culprit though22:06
RPhalstead: I think we've worked around it22:06
RPJPEW: pretty sure its runqueue's code given the pattern22:06
RPJPEW: lines 2264-230922:07
halsteadRP, Nice. So it's fine to turn back on now? I'll also start saving logs of deleted object names.22:07
RPhalstead: yes, thanks22:07
*** dv_ <dv_!> has quit IRC22:08
RPJPEW: I think we can take 2275-2309 out of the top for loop22:08
RPbut I doubt its the only problem22:08
RPJPEW: The local reproducer is to do a big build, then make a change to sstate.bbclass which doesn't change output but does force a complete rebuild where the output will always match22:09
* RP suspects an O2 or worse slowdown22:09
JPEWOk, I'll give that a try. I think I have the batching of the sstate object check sorted out22:10
RPJPEW: I'm talking out loud btw, not saying you need to do this! :)22:11
RPI think I need to reproduce under bitbake -P and check where the time in the loop is spent22:11
JPEWRP: Ya, that's fine :) I figured moving the update out looked easy enough. I wasn't going to look at the rest yet22:12
RPcould be as simple as commenting the logger.debug()22:12
RPwe've had this before22:12
RPJPEW: fixing the mirror check would help a lot as it's clearly crazy atm :)22:13
*** hpsy <hpsy!~hpsy@> has quit IRC22:15
*** hpsy <hpsy!~hpsy@> has joined #yocto22:15
JPEWRP: Patch sent if you want to take a look. I have to go to a Christmas program now22:18
*** dv_ <dv_!~dv@> has joined #yocto22:21
RPJPEW: thanks!22:26
RPJPEW: its the get_taskhash and get_unihash calls22:32
RPJPEW: I managed to get a profile22:32
*** fray <fray!> has quit IRC22:39
*** fray <fray!> has joined #yocto22:42
RPJPEW: you won't believe what the problem is :/22:56
*** jklare <jklare!~jklare@> has quit IRC22:59
*** jklare <jklare!~jklare@> has joined #yocto23:03
*** vineela <vineela!~vtummala@> has quit IRC23:11
rburtonRP: g'wan tell23:14
*** rburton_ <rburton_!> has joined #yocto23:17
*** rburton <rburton!> has quit IRC23:19
*** hpsy <hpsy!~hpsy@> has quit IRC23:34
*** mrc3 <mrc3!~mrc3@linaro/mrc3> has quit IRC23:39
*** hpsy <hpsy!~hpsy@> has joined #yocto23:41
*** bluca <bluca!~bluca@> has quit IRC23:42
*** hpsy <hpsy!~hpsy@> has quit IRC23:46
RPrburton_: well, it turned out to be complicated23:58
*** rburton_ <rburton_!> has quit IRC23:58
RPheh, he ran away :)23:59

Generated by 2.11.0 by Marius Gedminas - find it at!