*** sgw_ <sgw_!~sgw_@134.134.139.77> has quit IRC | 00:01 | |
*** Crofton <Crofton!~balister@fw.whitepine.k12.nv.us> has quit IRC | 00:02 | |
*** billr <billr!~wcrandle@134.134.139.83> has quit IRC | 00:02 | |
*** t0mmy_ <t0mmy_!~tprrt@37.167.52.114> has quit IRC | 00:26 | |
*** sameo <sameo!samuel@nat/intel/x-ylusvvipifzesmzd> has quit IRC | 00:45 | |
*** paulg <paulg!~paulg@OTWAON23-3096772825.sdsl.bell.ca> has quit IRC | 00:45 | |
*** tjamison <tjamison!~tjamison@38.104.105.146> has quit IRC | 01:07 | |
*** armpit <armpit!~akuster@50.233.148.158> has quit IRC | 01:11 | |
*** ka6sox is now known as zz_ka6sox | 01:48 | |
-YoctoAutoBuilder- build #848 of nightly-ppc-lsb is complete: Failure [failed Running Sanity Tests] Build details are at http://autobuilder.yoctoproject.org/main/builders/nightly-ppc-lsb/builds/848 | 02:19 | |
*** RagBal <RagBal!~RagBal@82-168-15-181.ip.open.net> has quit IRC | 02:23 | |
*** Rootert <Rootert!~Rootert@82-168-15-181.ip.open.net> has quit IRC | 02:23 | |
khem | paulg_: dev branches are rebased IIRC, so all one should use for them is AUTOREV | 02:31 |
*** blueness <blueness!~blueness@gentoo/developer/blueness> has quit IRC | 02:40 | |
*** crankslider <crankslider!~slidercra@unaffiliated/slidercrank> has quit IRC | 02:51 | |
-YoctoAutoBuilder- build #948 of nightly is complete: Failure [failed] Build details are at http://autobuilder.yoctoproject.org/main/builders/nightly/builds/948 | 02:58 | |
*** blueness <blueness!~blueness@gentoo/developer/blueness> has joined #yocto | 03:19 | |
*** zz_ka6sox is now known as ka6sox | 03:28 | |
*** tomz_ <tomz_!tomz@nat/intel/x-aujhhkciodvyhvol> has quit IRC | 03:55 | |
*** tomz_ <tomz_!~tomz@134.134.139.77> has joined #yocto | 04:09 | |
*** ka6sox is now known as zz_ka6sox | 04:53 | |
*** zz_ka6sox is now known as ka6sox | 05:08 | |
*** obsrwr <obsrwr!~otp-amois@catv-78-139-0-146.catv.broadband.hu> has joined #yocto | 05:18 | |
*** obsrwr <obsrwr!~otp-amois@catv-78-139-0-146.catv.broadband.hu> has quit IRC | 05:21 | |
*** LocutusOfBorg <LocutusOfBorg!~Gianfranc@ubuntu/member/locutusofborg> has quit IRC | 05:30 | |
*** armpit <armpit!~akuster@2601:202:4001:9ea0:81a:ab00:ca81:5385> has joined #yocto | 05:38 | |
*** agust <agust!~agust@p4FCB5AA0.dip0.t-ipconnect.de> has joined #yocto | 05:42 | |
*** sgw_ <sgw_!~sgw_@134.134.139.82> has joined #yocto | 05:44 | |
*** sno <sno!~sno@b2b-78-94-80-58.unitymedia.biz> has quit IRC | 05:44 | |
*** sno <sno!~sno@b2b-78-94-80-58.unitymedia.biz> has joined #yocto | 05:50 | |
*** sno <sno!~sno@b2b-78-94-80-58.unitymedia.biz> has quit IRC | 05:53 | |
*** Nilesh_ <Nilesh_!uid116340@gateway/web/irccloud.com/x-esogcglcmfrkspqf> has joined #yocto | 05:54 | |
*** sno <sno!~sno@b2b-78-94-80-58.unitymedia.biz> has joined #yocto | 05:54 | |
*** sno <sno!~sno@b2b-78-94-80-58.unitymedia.biz> has quit IRC | 06:07 | |
*** kukowski <kukowski!c2fb77c5@gateway/web/freenode/ip.194.251.119.197> has joined #yocto | 06:13 | |
kukowski | Hi, I have a problem with a Python task: how can a Python function tell whether the previous task actually ran or was restored from cache? | 06:14 |
*** pohly <pohly!~pohly@p57A5603A.dip0.t-ipconnect.de> has joined #yocto | 06:19 | |
*** V12 <V12!c32a382c@gateway/web/freenode/ip.195.42.56.44> has joined #yocto | 06:20 | |
V12 | hi guys | 06:23 |
kukowski | Hi, I have a problem with a Python task: how can a Python function tell whether the previous task actually ran or was restored from cache? | 06:26 |
kukowski | does anybody know? | 06:29 |
kukowski | if do_compile actually ran, the binaries are in the $D dir, but if not the binaries are in the $SYSROOT_DESTDIR dir, so I have to adjust the path to the files; how do I know whether the task ran? | 06:32 |
*** hamis_lt_u <hamis_lt_u!~irfan@110.93.212.98> has joined #yocto | 06:34 | |
*** Rootert <Rootert!~Rootert@82-168-15-181.ip.open.net> has joined #yocto | 06:34 | |
*** RagBal <RagBal!~RagBal@82-168-15-181.ip.open.net> has joined #yocto | 06:35 | |
kukowski | if do_compile actually ran, the binaries are in the $D dir, but if not the binaries are in the $SYSROOT_DESTDIR dir, so I have to adjust the path to the files; how do I know whether the task ran? | 06:36 |
V12 | kukowski: you can make your task dependent on do_compile | 06:37 |
kukowski | i have to do it in my python function | 06:38 |
V12 | addtask your_py_func after do_compile | 06:39 |
V12 | python your_py_func() {} | 06:40 |
V12 | should do the trick | 06:40 |
kukowski | I can't overwrite the file in the do_compile function; I have to know whether do_compile actually ran from another file, the one with the run_ptest task | 06:44 |
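A minimal sketch of the pattern V12 suggests, extended with the ${D}-vs-${SYSROOT_DESTDIR} fallback kukowski describes; the task name, binary path and placement in the task chain are assumptions, not something confirmed in the channel:

    python do_locate_binaries() {
        import os
        binary = "usr/bin/myprog"   # hypothetical file of interest
        path = os.path.join(d.getVar('D', True), binary)
        if not os.path.exists(path):
            # do_install output is not present (e.g. the task was restored from
            # sstate), so fall back to the copy staged into the sysroot
            path = os.path.join(d.getVar('SYSROOT_DESTDIR', True), binary)
        bb.note("using binary at %s" % path)
    }
    addtask do_locate_binaries after do_install before do_package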
*** csanchezdll <csanchezdll!~user@galileo.kdpof.com> has joined #yocto | 06:46 | |
*** MWelchUK <MWelchUK!~martyn@host81-135-119-51.range81-135.btcentralplus.com> has quit IRC | 06:48 | |
*** qt-x <qt-x!~Thunderbi@217.10.196.2> has joined #yocto | 06:49 | |
*** MWelchUK <MWelchUK!~martyn@host81-135-119-51.range81-135.btcentralplus.com> has joined #yocto | 06:49 | |
*** rob_w <rob_w!~bob@unaffiliated/rob-w/x-1112029> has joined #yocto | 06:55 | |
*** obsrwr <obsrwr!~otp-amois@catv-78-139-0-146.catv.broadband.hu> has joined #yocto | 06:57 | |
*** zeddii_home <zeddii_home!~zeddii_ho@CPEe8de27b71faa-CMbcc810032faf.cpe.net.cable.rogers.com> has quit IRC | 06:58 | |
*** zeddii_home <zeddii_home!~zeddii_ho@CPEe8de27b71faa-CMbcc810032faf.cpe.net.cable.rogers.com> has joined #yocto | 06:59 | |
*** melonipoika <melonipoika!~jose@194.9.252.237> has quit IRC | 07:04 | |
*** oan <oan!~oan@c83-254-9-28.bredband.comhem.se> has quit IRC | 07:04 | |
*** rajm <rajm!~robertmar@82-70-136-246.dsl.in-addr.zen.co.uk> has joined #yocto | 07:12 | |
*** townxelliot <townxelliot!~ell@176.249.240.35> has joined #yocto | 07:14 | |
*** melonipoika <melonipoika!~jose@194.9.252.237> has joined #yocto | 07:15 | |
*** present <present!~present@46.218.87.184> has joined #yocto | 07:19 | |
*** sno <sno!~sno@62.157.143.22> has joined #yocto | 07:21 | |
*** __karthik <__karthik!~karthik@192.91.75.29> has quit IRC | 07:21 | |
*** __karthik <__karthik!~karthik@192.91.75.29> has joined #yocto | 07:22 | |
*** V12 <V12!c32a382c@gateway/web/freenode/ip.195.42.56.44> has quit IRC | 07:26 | |
*** florian <florian!~fuchs@Maemo/community/contributor/florian> has joined #yocto | 07:40 | |
*** rburton <rburton!~Adium@home.burtonini.com> has joined #yocto | 07:44 | |
*** boucman_work <boucman_work!~boucman@229.29.205.77.rev.sfr.net> has joined #yocto | 07:54 | |
boucman_work | hey all | 07:55 |
*** yann <yann!~yann@85-171-21-92.rev.numericable.fr> has joined #yocto | 07:56 | |
*** Nilesh_ <Nilesh_!uid116340@gateway/web/irccloud.com/x-esogcglcmfrkspqf> has quit IRC | 07:57 | |
*** toscalix <toscalix!~toscalix@80.91.95.202> has joined #yocto | 07:59 | |
*** rburton <rburton!~Adium@home.burtonini.com> has joined #yocto | 07:59 | |
*** obsrwr <obsrwr!~otp-amois@catv-78-139-0-146.catv.broadband.hu> has quit IRC | 08:02 | |
*** Biliogadafr <Biliogadafr!~pin@nat3-minsk-pool-46-53-182-183.telecom.by> has joined #yocto | 08:07 | |
*** obsrwr <obsrwr!~otp-amois@catv-78-139-0-146.catv.broadband.hu> has joined #yocto | 08:08 | |
*** crankslider <crankslider!~slidercra@unaffiliated/slidercrank> has joined #yocto | 08:09 | |
*** jku <jku!jku@nat/intel/x-sxdramddefdpcedp> has joined #yocto | 08:10 | |
boucman_work | how does one regenerate the bitbake documentation from the XML ? | 08:12 |
*** sameo <sameo!samuel@nat/intel/x-dgizzzzkumbikykh> has joined #yocto | 08:21 | |
*** ftonello <ftonello!~felipe@81.145.202.106> has quit IRC | 08:22 | |
*** ftonello <ftonello!~felipe@81.145.202.106> has joined #yocto | 08:23 | |
LetoThe2nd | boucman_work: see the bitbake/doc directory in poky, and the readme there. should come with a makefile. | 08:25 |
boucman_work | LetoThe2nd: thx, found it... as usual I asked my question too fast :( sorry about that | 08:26 |
LetoThe2nd | np | 08:27 |
*** khem <khem!~khem@unaffiliated/khem> has quit IRC | 08:28 | |
sveinse | I guess most of you end up using multiple layers from multiple sources. Out of curiosity, what do you use to manage the sources? | 08:32 |
*** khem <khem!~khem@unaffiliated/khem> has joined #yocto | 08:32 | |
*** t0mmy_ <t0mmy_!~tprrt@217.114.201.133> has joined #yocto | 08:34 | |
boucman_work | sveinse: you mean the layers ? | 08:34 |
boucman_work | I mainly do it manually, we do have a custom tool but I'm not very fond of it | 08:35 |
boucman_work | I would recommend having a look at bitbake-layers and repo... | 08:35 |
boucman_work | repo seems the most common tool to save a yocto build layout... | 08:35 |
sveinse | Yes, I am looking at repo. We use it for other collections of git repos | 08:36 |
LetoThe2nd | sveinse: mirror all upstream external sources on our own machines, couple of shell scripts to regenerate project structures. | 08:36 |
sveinse | But you'd need some tool to configure build, such as setting the right dirs in bblayers.conf | 08:36 |
LetoThe2nd | sveinse: repo can fill in for the latter, but we consider it bloat and missing some features, at least at the time we looked at it. | 08:37 |
sveinse | LetoThe2nd: It seems a lot do make such custom scripts. I've seen three variants in three companies now | 08:37 |
LetoThe2nd | sveinse: yeah, our scripts do all that. complete rebuild of local.conf and bblayers.conf, including checkouts of all needed sources with specific states :-) | 08:37 |
sveinse | (and we'll probably make our own as well) | 08:38 |
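For reference, the kind of conf/bblayers.conf such setup scripts end up generating; the paths below are invented for illustration:

    # conf/bblayers.conf (a minimal sketch, hypothetical paths)
    BBPATH = "${TOPDIR}"
    BBFILES ?= ""
    BBLAYERS ?= " \
      /home/build/poky/meta \
      /home/build/poky/meta-yocto \
      /home/build/meta-openembedded/meta-oe \
      /home/build/meta-qt5 \
      "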
LetoThe2nd | sveinse: probably there are 30, 300, or even 3000 of them | 08:38 |
boucman_work | yeah so does our script... that's the easy part. the hard part is dealing with the various use-cases of each dev, regenerating the XML description of the layout and uploading it... | 08:38 |
LetoThe2nd | sveinse: in case you're interested, ours is public https://github.com/LetoThe2nd/blubber | 08:38 |
LetoThe2nd | sveinse: a major rewrite is underway, but nothing yet to show. | 08:39 |
sveinse | LetoThe2nd: thanks. BTW, nice Dune reference :P | 08:40 |
LetoThe2nd | sveinse: :-) | 08:40 |
boucman_work | so far, repo is the best tool I have found, though I'd agree with LetoThe2nd about it being not perfectly fitted for yocto. | 08:42 |
boucman_work | it does have some really cool features though. If I had to write a new tool, i'd definitely base it on repo or on similar concepts | 08:42 |
sveinse | This is perhaps an idea for the Yocto SC (if there is one) to set some tool/precedence? | 08:43 |
sveinse | SC = Steering Committee | 08:43 |
*** belen <belen!~Adium@134.134.139.82> has joined #yocto | 08:43 | |
LetoThe2nd | the discussion has been there several times. FWIW, the general opinion is "use what fits your bill most, for many people it seems to be repo" | 08:44 |
*** oan <oan!~oan@c83-254-9-28.bredband.comhem.se> has joined #yocto | 08:44 | |
boucman_work | I read it as Summer of Code :P | 08:44 |
LetoThe2nd | i read it as SupaCrowd | 08:45 |
boucman_work | yeah, I don't like that conclusion. There are dozens of variants of that tool, basically one has been developed by each BSP provider | 08:45 |
boucman_work | most are heavily broken shell scripts that break whenever you don't do things exactly the way they are meant to be done | 08:45 |
boucman_work | and there would be real added value to yocto if the project used its "authority" to standardize on one tool... even if the tool is not perfect | 08:46 |
boucman_work | sometime a bad answer is better than no answer at all | 08:46 |
LetoThe2nd | i personally think it's a better option than setting some new preference, which would be absolutely prone to becoming https://xkcd.com/927 one more time | 08:46 |
sveinse | Yes, but the downside is loss of consistency. I'm working in a company that makes end user products, and we use Yocto as a technical base for those products. But often we use subcontractors to supply BSP support, and we see a lot of different coding practices, which we then later have to unify. | 08:46 |
*** kukowski <kukowski!c2fb77c5@gateway/web/freenode/ip.194.251.119.197> has quit IRC | 08:46 | |
LetoThe2nd | boucman_work: mind, the conclusion is not "invent something new", but "here, look at this couple of projects, they use repo." | 08:47 |
boucman_work | sveinse: that's why I think the "authority" part is important. The idea is not to provide a better tool but an authoritative tool | 08:47 |
sveinse | yes, defacto working practices work equally well imho | 08:48 |
LetoThe2nd | tools should convince by quality, not by authority. i strongly disagree here. | 08:48 |
boucman_work | repo could be the answer... it does have a couple of limitations (having the manifest forced to be in a git repo is one of them, forced autoupdate from google servers is another one) but it is the best tool I know of currently | 08:48 |
boucman_work | LetoThe2nd: you base your assumption on the idea that BSP providers know how to use yocto/can assert what a good tool is | 08:49 |
boucman_work | I agree on principle, but I deal with that mess everyday, so I have to disagree in practice | 08:50 |
sveinse | No, I'm not too fond of authority either, because authority assumes that you can foresee all the use cases for it, and that is hardly the case. | 08:50 |
LetoThe2nd | boucman_work: whereas you base your assumption on the idea that some steering committee somewhere else can judge what fits the workflow of a majority of people. | 08:50 |
sveinse | Doing Yocto ports is a proof of that, as I daily encounter something that evidently is not thought of | 08:50 |
LetoThe2nd | boucman_work: both have up- and downsides. | 08:51 |
LetoThe2nd | both are assumptions. | 08:51 |
boucman_work | agreed | 08:51 |
*** gtristan <gtristan!~tristanva@82-70-136-246.dsl.in-addr.zen.co.uk> has joined #yocto | 08:51 | |
LetoThe2nd | :-) | 08:51 |
boucman_work | that's why they are opinions and not decision :P | 08:51 |
*** mortderire <mortderire!~rkinsell@134.134.139.82> has joined #yocto | 08:52 | |
boucman_work | one feature that I really miss in repo is the ability to specify the manifest in something other than a git repo... a local directory with the same layout, a tar.gz, or even being able to build a version of repo.py that embeds the manifest... that would be awesome | 08:53 |
sveinse | (hmm. I had expected that krogoth has support for ubuntu 16.04 :o) | 08:56 |
rburton | sveinse: what's the problem? | 08:57 |
sveinse | rburton: No problem, just a note about the warning | 08:57 |
rburton | ah, should work then | 08:57 |
rburton | unlike fedora 24 which needs patches (point release coming) | 08:58 |
rburton | SANITY_TESTED_DISTROS="" is your friend :) | 08:58 |
rburton | (or write your own distro config, poky is an example) | 08:58 |
*** mortderire <mortderire!~rkinsell@192.55.54.44> has joined #yocto | 08:58 | |
*** jubr <jubr!57ed1b9e@gateway/web/freenode/ip.87.237.27.158> has joined #yocto | 09:00 | |
sveinse | Are the versions of the various layers tightly bound to each other? | 09:01 |
sveinse | I mean | 09:01 |
rburton | if you're using krogoth oe-core then you want to track krogoth of each other branch | 09:01 |
jubr | Hi guys, I have a bitbake dynamic configuration loading question for y'all, anybody not idling in here? :) | 09:01 |
rburton | jubr: ask, don't ask to ask | 09:01 |
jubr | true, it has been years since my last IRC experience, I'm a little rusty :) | 09:02 |
sveinse | I just downloaded krogoth to be able to test a qemu/intel build, and I see that the original meta-qt5 is at jethro. The problem is that we have a lot of local patches: to poky, to meta-qt5 and to meta-openembedded | 09:04 |
sveinse | My objective is to move our application from one BSP to another | 09:05 |
*** t0mmy_ <t0mmy_!~tprrt@217.114.201.133> has quit IRC | 09:06 | |
rburton | why dont you stick with jethro? | 09:06 |
sveinse | rburton: Well, it was the missing 'sdl' thing against qemu | 09:06 |
*** mortderire <mortderire!~rkinsell@192.55.54.44> has quit IRC | 09:07 | |
rburton | jethro 2.0.2 is fixed for that | 09:07 |
*** t0mmy_ <t0mmy_!~tprrt@217.114.201.133> has joined #yocto | 09:07 | |
sveinse | rburton: yeah, but I still need to update a patched poky, which is about as much work as moving to 2.1 | 09:08 |
rburton | well not really, if you're on 2.0.0 then the migration to 2.0.2 is mostly trivial. also, this is why i suggest not patching oe-core/poky directly... | 09:08 |
jubr | I have created an internal release management tool that allows us to maintain opkg feeds with hand-picked software components for our software releases. I've added functionality that writes a release.conf file with the same PN+SRCREV information for each component. The goal is to be able to build an image with these specific software components in them. I'm trying to http fetch + load this .conf file from inside OE. | 09:09 |
sveinse | I'm attempting intel/qemu, because I'd hoped that this could serve as a generic playground for porting the app before attempting the real HW BSP | 09:09 |
rburton | absolutely sensible | 09:10 |
*** benjamirc1 <benjamirc1!~besquive@134.134.139.72> has quit IRC | 09:10 | |
rburton | intel and qemu targets have always worked, just grab meta-intel and use the right branch if you want to boot on real x86 | 09:10 |
*** mortderire <mortderire!~rkinsell@192.55.54.44> has joined #yocto | 09:10 | |
jubr | But I'm running into problems when dynamically loading the release.conf - since it does not yet exist at OE-start time. I've added caching. But now the conf information only seems to be available in the Recipe Parse phase, it does not seem to reach the worker threads. | 09:10 |
*** benjamirc <benjamirc!~besquive@134.134.139.72> has joined #yocto | 09:11 | |
sveinse | rburton: cool | 09:12 |
*** crankslider <crankslider!~slidercra@unaffiliated/slidercrank> has quit IRC | 09:12 | |
jubr | I've `release_conf_fetch_and_cache[eventmask] = "bb.event.ParseStarted"` and then use `bb.parse.handle(release_conf_cache_path, d, True)` to dynamically load it | 09:13 |
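For readers following along, a sketch of the handler pattern jubr is describing; the handler name and eventmask line are jubr's, the cache path is hypothetical, and whether the parsed variables survive into the worker processes is exactly the open question:

    addhandler release_conf_fetch_and_cache
    release_conf_fetch_and_cache[eventmask] = "bb.event.ParseStarted"
    python release_conf_fetch_and_cache() {
        # fetch release.conf over http and cache it locally (details omitted),
        # then parse it so its PN/SRCREV assignments land in the datastore
        release_conf_cache_path = "/path/to/cache/release.conf"
        bb.parse.handle(release_conf_cache_path, e.data, True)
    }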
sveinse | rburton: We have like 10 patches or so on poky. Most are about bugs/features in poky that do not play well with the specific HW | 09:13 |
sveinse | But I definitely see the issue here. I'm now stuck when considering other vendors/BSPs | 09:14 |
rburton | sveinse: vendors which don't track oe-core releases are a right pain | 09:15 |
rburton | *but* if you have a final target BSP then use the release they support | 09:15 |
rburton | qemu and real intel will build on every release | 09:15 |
rburton | hopefully you cloned poky and branched? | 09:16 |
*** mortderire <mortderire!~rkinsell@192.55.54.44> has quit IRC | 09:17 | |
*** belen <belen!~Adium@134.134.139.82> has quit IRC | 09:17 | |
sveinse | No, it does not seem that way. And yes, I will direct the ranting in their direction, not here. So for the record, I'm not asking here to fix/undo what has been done. I just need humble advice on how to move forward. | 09:18 |
*** mortderire <mortderire!~rkinsell@134.134.137.75> has joined #yocto | 09:18 | |
*** belen <belen!Adium@nat/intel/x-vhdpbgqisafviwik> has joined #yocto | 09:19 | |
rburton | clone poky, branch it where your BSP starts from, copy your patches in, commit, rebase to the latest stable release to get the sdl/qemu fixes, party. | 09:19 |
*** mortderire <mortderire!~rkinsell@134.134.137.75> has quit IRC | 09:20 | |
*** mortderire <mortderire!~rkinsell@134.134.137.75> has joined #yocto | 09:22 | |
*** jku <jku!jku@nat/intel/x-sxdramddefdpcedp> has quit IRC | 09:22 | |
*** justanotherboy1 <justanotherboy1!~mlopezva@134.134.139.83> has joined #yocto | 09:22 | |
*** justanotherboy <justanotherboy!mlopezva@nat/intel/x-bqtbozurkpegxryo> has quit IRC | 09:23 | |
sveinse | I have a set of layers that has been modified to fit one particular BSP and system, "system" meaning our requirements for the available software. When moving to another HW and BSP I can do one of two things. I can either start with a clean slate and work off official releases, but then I fear that I have to redo and relearn all the quirks and patches that have already been implemented into the... | 09:24 |
sveinse | ...former BSP. The other option is to build on the previous BSP, but that gives a strong tie-in to what has already been done, and it is already being surpassed by newer versions. | 09:24 |
sveinse | I'd hoped for the latter, as it gives us freedom to move across BSPs, but I fear the unknown amount of work that it might imply | 09:25 |
jubr | rburton/sveinse: interesting BSP/release discussion, we're currently doing something similar with i.MX6 + Freescale's official Yocto release | 09:30 |
jubr | I've managed to keep most changes in our own BSP meta layers with .bbappend and 2 or 3 .bbclass overrides. | 09:32 |
sveinse | jubr: this is in fact an imx6-based BSP too, custom HW though | 09:38 |
boucman_work | based on what board ? | 09:38 |
sveinse | with Qt5 for graphics, and that certainly increases the complexity by a few notches | 09:39 |
jubr | Ours too, i.MX6 SoloX | 09:39 |
jubr | sveinse: did you base it on the latest Freescale release? Do you have graphics acceleration on your board? | 09:40 |
sveinse | boucman_work: something called pandaboard, I believe. I have always been working with our custom HW, so I don't really know | 09:40 |
boucman_work | ok, I've never used that one, I don't know how good their BSP would be | 09:40 |
jubr | Ours is based on imx6sxsabresd ref design. | 09:41 |
* boucman_work uses a custom board based on the apalis by toradex | 09:41 | |
boucman_work | but we don't use their BSP, we use what they provide in meta-fsl-extra | 09:41 |
jubr | ah, yes, saw a ref to it in there somewhere. Includes a custom kernel I believe | 09:41 |
boucman_work | yes, but it works when you do your own build based on poky and just changing MACHINE so I'm quite happy | 09:42 |
sveinse | jubr, hmm, sabre, that sounds awfully familiar. | 09:42 |
jubr | rburton: know anything about Bitbake internal .conf parsing with bb.parse.handle()? | 09:43 |
* boucman_work has some very bad memory of a BSP that would download/rebuild a standard poky for another board, untar the rootfs, override the kernel and a couple of other stuff, then build a SD image based on that | 09:43 | |
sveinse | jubr, I honestly don't know what version, as we put the porting and implementation to a subcontractor. | 09:43 |
boucman_work | it was horrible | 09:43 |
sveinse | I'm just getting my hands wet with Yocto.... | 09:43 |
boucman_work | that's the sort of stuff that makes me think that standardizing on a deployment tool would be a good idea :P | 09:43 |
sveinse | jubr, but we do have gfx acceleration for Qt | 09:43 |
jubr | http://www.nxp.com/products/software-and-tools/hardware-development-tools/sabre-development-system/sabre-board-for-smart-devices-based-on-the-i.mx-6solox-applications-processors:RD-IMX6SX-SABRE | 09:44 |
boucman_work | sveinse: sabrelite and sabreauto are the reference dev board for i.mx designed by freescale | 09:44 |
boucman_work | or am I messing up with the nitrogen, I tend to mix those two | 09:44 |
jubr | yep, now acquired by NXP | 09:44 |
*** mortderire <mortderire!~rkinsell@134.134.137.75> has quit IRC | 09:45 | |
sveinse | jubr, based on sabresd | 09:45 |
-YoctoAutoBuilder- build #849 of nightly-ppc-lsb is complete: Success [build successful] Build details are at http://autobuilder.yoctoproject.org/main/builders/nightly-ppc-lsb/builds/849 | 09:46 | |
jubr | sveinse, ah that's good, which kernel ver? 3.14.38, 3.14.52? I saw they have a new 4.1.15 release out. We're still working with .52 here. | 09:46 |
jubr | boucman_work: Freescale's BSP is actually quite good I think. Full support for all the features on all of their i.MX6* boards from one BSP release. Nicely done. | 09:48 |
sveinse | jubr: .38 | 09:48 |
sveinse | have any of you been working with Variscite boards? | 09:48 |
sveinse | That is the BSP I'm porting my app to... | 09:48 |
jubr | sveinse: we had someone from NXP port our .38 changes to .52 release. It's doable. I wonder how much work migrating to 4.1 will take... *sigh* | 09:49 |
jubr | sveinse: nope | 09:49 |
boucman_work | I did, but I don't remember what project it was... | 09:49 |
*** mortderire <mortderire!~rkinsell@192.55.54.43> has joined #yocto | 09:49 | |
boucman_work | I think the BSP was quite messy, but I don't remember the specifics... | 09:50 |
boucman_work | AM33 not their imx boards | 09:50 |
sveinse | Ah, ok, we're looking at the imx6 based boards. We're peeking at the 6UL architecture | 09:51 |
boucman_work | might be cleaner, then... | 09:52 |
jubr | We settled for the 6SX, we needed the ADC's + gfx. Now have 512M RAM + 4G eMMC. /me happy :) | 09:52 |
sveinse | Yeah, we're not sure the 6UL has enough juice for what we need it to do. So it's a concept test | 09:53 |
sveinse | haha, with reference to our previous discussion about repo/layer management. Variscite apparently have their own. | 09:58 |
sveinse | I think I'll name my tool yalcmt. yet another layer configuration management tool | 09:58 |
jubr | sveinse: nice | 09:58 |
jubr | I'm starting to appreciate google repo. Now if we would only start to use sha's in our manifest instead of moving-target-master refs :) | 09:59 |
*** belen <belen!Adium@nat/intel/x-vhdpbgqisafviwik> has quit IRC | 10:04 | |
* CTtpollard uses submodules | 10:04 | |
CTtpollard | mainly because nty google | 10:04 |
sveinse | for SCM that's nice, but I suppose you have a script for the build conf as well? | 10:09 |
CTtpollard | yes | 10:10 |
*** t0mmy_ <t0mmy_!~tprrt@217.114.201.133> has quit IRC | 10:12 | |
*** t0mmy_ <t0mmy_!~tprrt@217.114.201.133> has joined #yocto | 10:13 | |
*** belen <belen!~Adium@134.134.139.76> has joined #yocto | 10:18 | |
*** clopez <clopez!~tau@neutrino.es> has quit IRC | 10:29 | |
*** clopez <clopez!~tau@neutrino.es> has joined #yocto | 10:34 | |
jubr | @all: anybody here know anything about Bitbake internal .conf parsing with bb.parse.handle()? | 10:43 |
sveinse | Do any of you use external sources? Our software is a humongous hg repository, and (on 2.0.1 at least) hg support seems iffy. But when I attempted to use EXTERNALSRC on 2.1, yocto stops at: Failure expanding variable do_compile[file-checksums], expression was ${@srctree_hash_files(d)} which triggered exception IOError: [Errno 2] No such file or directory: '/home/nosse1/yocto/sp/repos/sp/.git/index' | 10:43 |
sveinse | So it apparently wants it to be a git repo | 10:44 |
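For context, externalsrc is normally wired up along these lines in local.conf; the recipe name "myapp" is made up, and the path is the one from the error above:

    INHERIT += "externalsrc"
    EXTERNALSRC_pn-myapp = "/home/nosse1/yocto/sp/repos/sp"
    # optionally keep build output separate from the source tree
    EXTERNALSRC_BUILD_pn-myapp = "/home/nosse1/yocto/sp/build"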
*** JaMa <JaMa!~martin@ip-89-176-104-169.net.upcbroadband.cz> has joined #yocto | 10:46 | |
*** mortderire <mortderire!~rkinsell@192.55.54.43> has quit IRC | 11:00 | |
boucman_work | bitbake patch submitted, hopefully will get in | 11:02 |
* boucman_work <= noob enthusiasm | 11:02 | |
* jubr is appreciative :) | 11:03 | |
*** mortderire <mortderire!rkinsell@nat/intel/x-ijsglcuqpefbrqse> has joined #yocto | 11:03 | |
*** mortderire <mortderire!rkinsell@nat/intel/x-ijsglcuqpefbrqse> has quit IRC | 11:07 | |
*** mortderire <mortderire!rkinsell@nat/intel/x-lqkcneibsmgsomxe> has joined #yocto | 11:11 | |
sveinse | Sorry to repeat, but how do I get past this: Failure expanding variable do_compile[file-checksums], expression was ${@srctree_hash_files(d)} which triggered exception IOError: [Errno 2] No such file or directory: '/home/nosse1/yocto/sp/repos/sp/.git/index' | 11:13 |
* sveinse giving up | 11:23 | |
jubr | sveinse: does not ring a bell, sorry | 11:24 |
jubr | PS did you see http://lists.openembedded.org/pipermail/openembedded-core/2016-March/119334.html | 11:24 |
sveinse | Ah, I think I know what it is. We use hg, plus some submodules that happen to be git repos. hg has some configs in .git, which later makes BB think it's a git repo. | 11:26 |
sveinse | Isn't it lovely? | 11:26 |
boucman_work | sveinse: that is a complicated question, chances are very few people can answer :( | 11:26 |
* jubr my pleasure :) | 11:26 | |
Ulfalizer | sveinse: yeah, i think the problem is that your repo has a .git/ in it, but srctree_hash_files() can't find the files it expects in it | 11:27 |
boucman_work | sveinse: tbh, a .git/ directory is the standard way of detecting a git repo :P | 11:27 |
jubr | Same as my intro + question above. No-one knows. | 11:27 |
* jubr *sniffs | 11:27 | |
Ulfalizer | sveinse: you'll find the source for srctree_hash_files() at the bottom of meta/classes/externalsrc.bbclass btw | 11:27 |
sveinse | yup, but I think I have the answer | 11:27 |
sveinse | Ulfalizer: thanks, I'll look at it | 11:27 |
sveinse | I think this is rather ironic. Didn't we start out the discussion today saying that we should try to avoid patching poky? Well, it seems that there are lots of cases where it's unavoidable :P | 11:29 |
sveinse | No pun intended, though | 11:30 |
jubr | I know. I even have a custom base.bbclass with a patch to make the files do_fetch writes into ${DL_DIR} group-writeable | 11:34 |
jubr | I've set up a shared OE build env for our developers with shared DLs + sstate. Hugely speeds up dev time and cuts disk usage. | 11:35 |
sveinse | cool | 11:36 |
*** berton <berton!~fabio@177.100.227.79> has joined #yocto | 11:36 | |
sveinse | jubr, btw, if you have a CI server, or a build server, how far back do you take each build to ensure consistency? E.g. Do you wipe tmp/ | 11:37 |
sveinse | We need to set up some similar scheme for sstate cache from a central build server. IT is already on my back for completely blowing up the requirements for the developer machines when working with Yocto. | 11:40 |
jubr | jenkins, wip, currently not done yet. At least all releases are built with it though. I should make a second job that does a clean rebuild every now + then. | 11:41 |
* jubr is wondering if there is such a thing as SSTATE_EXLUDE = "recipe1 recipe2"? | 11:41 | |
jubr | sstate mirror stuff exists though | 11:42 |
*** mortderire <mortderire!rkinsell@nat/intel/x-jrztclznfjwlfkrn> has joined #yocto | 11:43 | |
*** Cwiiis <Cwiiis!sid227@gateway/web/mozilla/x-egnpsoosebuveuax> has joined #yocto | 11:50 | |
Cwiiis | Trying to build a recipe for a kernel module, I keep falling over because version.h isn't in the kernel src directory and it ends up using the host system's (which in this case, doesn't match at all) - I can see the generated version.h in the work directory, but it's not in the work-shared directory... Anyone know what's up with that? | 11:51 |
rburton | zeddii zeddii_home ^ | 11:54 |
Cwiiis | well, when I say I see it in the work directory, I see it in the sysroot directory for an image that ends up with it and I see it in the work directory of linux-libc-headers | 11:54 |
boucman_work | rburton: architecture question: I want to add a way to expand bb variables in target files (typically they come from "SRC_URI=file://" and end up somewhere under the target's "/etc") | 11:56 |
boucman_work | so somewhere on the do_fetch => do_unpack => do_patch chain | 11:56 |
boucman_work | should I try to do that in bitbake, somewhere in fetch2, or should I keep it yocto-specific the way do_patch is in patch.bbclass ? | 11:57 |
rburton | latter | 11:57 |
rburton | i'd write a class that adds a task between patch and compile | 11:58 |
rburton | if you had lots of recipes that did the same thing | 11:58 |
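A sketch of what rburton suggests: a small class that adds a task after do_patch; the class name, task name and the EXPAND_VARS_FILES variable are all hypothetical:

    # expand-vars.bbclass (hypothetical)
    EXPAND_VARS_FILES ?= ""

    python do_expand_vars() {
        import os
        workdir = d.getVar('WORKDIR', True)
        for name in (d.getVar('EXPAND_VARS_FILES', True) or "").split():
            path = os.path.join(workdir, name)
            with open(path) as f:
                contents = f.read()
            # expand ${VAR} references against the recipe's datastore
            with open(path, 'w') as f:
                f.write(d.expand(contents))
    }
    addtask do_expand_vars after do_patch before do_configure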
jubr | rburton: does a way exist to manipulate a recipe's bb.data from inside a handler? | 11:59 |
boucman_work | ok, thx | 11:59 |
jubr | I just found out event.py's execute_handler() does `del event.data` :'( | 11:59 |
jubr | in a way that it actually propagates to the tasks | 12:02 |
*** jkroon_ <jkroon_!~jkroon@fw.mikrodidakt.se> has joined #yocto | 12:02 | |
*** Ulfalizer <Ulfalizer!~ulf@217.89.178.116> has quit IRC | 12:06 | |
jkroon_ | Hi. It seems like when I do incremental generation of my image's SDK with -c populate_sdk, each time the SDK gets a little bigger. I'm building the SDK with the kernel-devsrc package, and in the target's sysroot I have a pretty big /usr/src/kernel/.kernel-meta/ directory. I'm wondering if that could/should be discarded ? | 12:08 |
jkroon_ | (This is with pretty recent poky-master) | 12:09 |
-YoctoAutoBuilder- build #607 of nightly-oe-selftest is complete: Success [build successful] Build details are at http://autobuilder.yoctoproject.org/main/builders/nightly-oe-selftest/builds/607 | 12:16 | |
*** bluelightning <bluelightning!~paul@pdpc/supporter/professional/bluelightning> has quit IRC | 12:22 | |
rburton | jkroon_: that doesn't seem right | 12:22 |
*** marka <marka!~marka@128.224.252.2> has joined #yocto | 12:40 | |
*** jchonig <jchonig!~quassel@firewall.honig.net> has quit IRC | 12:41 | |
*** jchonig <jchonig!~quassel@firewall.honig.net> has joined #yocto | 12:42 | |
*** anselmolsm <anselmolsm!~anselmols@192.55.55.41> has joined #yocto | 12:42 | |
-YoctoAutoBuilder- build #258 of nightly-checkuri is complete: Failure [failed BuildImages] Build details are at http://autobuilder.yoctoproject.org/main/builders/nightly-checkuri/builds/258 | 12:45 | |
*** Amynka <Amynka!~frozen@gentoo/developer/amynka> has quit IRC | 12:57 | |
*** Amynka <Amynka!~frozen@gentoo/developer/amynka> has joined #yocto | 12:58 | |
CTtpollard | what's the cleanest way to set up a local source mirror for multiple builds to share? I know for a single pipeline the easiest method is to populate it with the output of an initial download_dir, but doing this for 'x' targets will create a lot of crossover | 13:02 |
neverpanic | run all x targets with $DL_DIR pointing to the same folder? | 13:03 |
jkroon_ | zeddii, ping | 13:05 |
*** paulg <paulg!~paulg@128.224.252.2> has joined #yocto | 13:08 | |
*** cference <cference!~cference@64.187.189.2> has joined #yocto | 13:13 | |
rburton | CTtpollard: bitbake world -c fetchall | 13:16 |
CTtpollard | neverpanic: ok, so share dl_dir & source_mirror in each local.conf? | 13:17 |
rburton | but yeah, if they're all local, then they can share DL_DIR | 13:17 |
rburton | (and SSTATE_DIR) | 13:17 |
*** Amynka <Amynka!~frozen@gentoo/developer/amynka> has quit IRC | 13:17 | |
CTtpollard | ok, well I'm currently inheriting the local mirror, I'll also share dl_dir as the same location | 13:17 |
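In local.conf terms the shared setup looks roughly like this; the paths are made up, and own-mirrors is the stock class behind the source-mirror behaviour being discussed:

    DL_DIR = "/srv/yocto/downloads"
    SSTATE_DIR = "/srv/yocto/sstate-cache"
    INHERIT += "own-mirrors"
    SOURCE_MIRROR_URL = "file:///srv/yocto/source-mirror"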
*** Amynka <Amynka!~frozen@gentoo/developer/amynka> has joined #yocto | 13:18 | |
jkroon_ | rburton, I may have spoken too soon. I did a -c clean -c cleansstate virtual/kernel and kernel-devsrc, rebuilt the SDK and it still comes out as big as before. hmm. | 13:21 |
boucman_work | rburton: are you admin on the bitbake ML ? or should I wait for RP to be around ? | 13:22 |
sveinse | disk is certainly at a premium when working with yocto. How can I clean a build when an image is done, yet still be able to work on it? If I have understood things correctly, sstate will cache most packages, right? | 13:22 |
rburton | boucman_work: no, yes. | 13:23 |
RP | boucman_work: I did look at your patch you pasted here yesterday and looked good | 13:23 |
neverpanic | rburton: CTtpollard: I'd advise against bitbake -c fetchall, because that just fetches but doesn't check whether what you have fetched actually builds, so you might populate your download cache with broken files. | 13:23 |
rburton | sveinse: if you find yourself running out of disk, you can inherit rm_work so it will delete the work directory when it's finished building a recipe. | 13:23 |
RP | boucman_work: I'd just want to check with kergoth about whether that is the right naming for the syntax | 13:23 |
rburton | sveinse: (or get a bigger disk) | 13:23 |
CTtpollard | neverpanic: yeh that's a good point to consider actually | 13:24 |
neverpanic | sveinse: you can always safely rm -rf tmp/ without losing too much time, since you still have sstate | 13:24 |
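rburton's rm_work suggestion in local.conf form; RM_WORK_EXCLUDE is optional and the recipe name is invented:

    INHERIT += "rm_work"
    # keep the work directory for recipes you are actively hacking on
    RM_WORK_EXCLUDE += "my-app"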
neverpanic | CTtpollard: We've seen it a couple of times where downloads failed and files in our download cache wouldn't even extract properly anymore | 13:24 |
neverpanic | In theory, that should never™ happen. | 13:25 |
*** igor2 <igor2!~igor@189.112.127.225> has joined #yocto | 13:25 | |
sveinse | rburton: (I wish I could get a bigger disk, but company policy and all that: using branded disks per the manufacturer's recommendation is a must, and I'm maxed out already. And I'm never going back to spinning drives.) | 13:26 |
rburton | sveinse: i still endorse spinning rust for build disks as they're so much larger, and given enough ram the performance improvement is actually negligible | 13:26 |
sveinse | I wonder if I could get away with an external USB3 drive... | 13:28 |
sveinse | hmm, the drives they put into these are probably too slow though, 5400 rpm and such | 13:28 |
*** Ulfalizer <Ulfalizer!~ulf@217.89.178.116> has joined #yocto | 13:28 | |
boucman_work | RP: I just subscribed to the bitbake ml and posted my patch | 13:30 |
boucman_work | problem is, my patch is curently in the moderator queue because my company was bought so our email adresses changed, and I goofed up my git-send-email config | 13:30 |
boucman_work | just wanted to point it out so there is no mess-up | 13:30 |
boucman_work | (if you our kergoth don't like the syntax, no problem) | 13:31 |
sveinse | Heh, my attempt to build our Qt code for qemu worked. What didn't work is execution; it crashes. So, someone is going to say "I told you so", but all I need to do to generate USB images for intel and boot natively is to pull in meta-intel ? | 13:31 |
jubr | neverpanic: We have a patched base.bbclass to make the files do_fetch writes in ${DL_DIR} group writeable, so it can be used by multiple developers | 13:31 |
*** Ulfalizer <Ulfalizer!~ulf@217.89.178.116> has quit IRC | 13:32 | |
rburton | sveinse: throw enough ram at the problem and you'd be surprised how little the disk is used | 13:33 |
boucman_work | jubr: I use local mirror rather than shared download... not sure how well shared downloads work... | 13:34 |
rburton | sveinse: add meta-intel, use intel-corei7-64 or -core2-32 machine as appropriate, copy the resulting .hddimg file to a usb stick | 13:35 |
sveinse | rburton: Thanks | 13:35 |
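In practice that amounts to adding meta-intel to bblayers.conf and picking the machine in local.conf; a sketch, with the deploy path being the usual default rather than something rburton spelled out:

    # local.conf
    MACHINE = "intel-core2-32"
    # after "bitbake core-image-sato" (or your own image) the bootable image is
    # tmp/deploy/images/intel-core2-32/<image-name>-intel-core2-32.hddimg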
jubr | boucman_work: pretty well, actually, with the umask patch. Note that changing base.bbclass will trigger a full rebuild, including cross-compiler + libc. | 13:36 |
sveinse | rburton: So all of these intel boards/machines closely resemble standard PC HW, right? | 13:39 |
rburton | depends what you mean | 13:39 |
rburton | the intel-core* BSPs target "standard" machines | 13:39 |
rburton | minnow counts as standard | 13:39 |
rburton | edison and galileo are a bit more special | 13:39 |
sveinse | rburton: it's nice to be intel in this respect :P Our systems /are/ standard by definition :P | 13:40 |
sveinse | I'll probably use some old washed-out laptop for testing, so corei7-64 is probably too new. core2-32 it is | 13:42 |
*** sgw_ <sgw_!~sgw_@134.134.139.82> has quit IRC | 13:46 | |
*** belen <belen!~Adium@134.134.139.76> has quit IRC | 13:50 | |
*** belen <belen!~Adium@134.134.139.76> has joined #yocto | 13:50 | |
*** fledermaus <fledermaus!~vivek@2a00:1098:5:0:ccb0:826a:ef2d:64d1> has joined #yocto | 13:52 | |
*** townxelliot <townxelliot!~ell@176.249.240.35> has quit IRC | 13:56 | |
*** madisox <madisox!~madison@12.30.244.5> has joined #yocto | 13:58 | |
*** lamego <lamego!~jose@134.134.139.77> has joined #yocto | 13:59 | |
*** townxelliot <townxelliot!~ell@176.249.240.35> has joined #yocto | 13:59 | |
*** JominlorTTT <JominlorTTT!6cab81a3@gateway/web/freenode/ip.108.171.129.163> has joined #yocto | 13:59 | |
sveinse | What do you guys do for configuring and package splitting? Please let me elaborate: | 14:01 |
*** rob_w <rob_w!~bob@unaffiliated/rob-w/x-1112029> has quit IRC | 14:02 | |
*** sgw_ <sgw_!~sgw_@134.134.139.82> has joined #yocto | 14:03 | |
sveinse | We have a large codebase which must be configured, and this configuration decides what gets installed. When supporting multiple MACHINEs (same tune), I am torn between doing this selection at configure time, so that I don't have to do FILES splitting, but that requires one build for each MACHINE; or configuring it generically (per tune), but then I need to maintain package splitting during installation. | 14:05 |
jubr | sveinse: can't you split it into multiple recipes? Like qt-*? | 14:08 |
sveinse | jubr, I can. But that then implies multiple compilations of the same code, doesn't it? | 14:08 |
sveinse | (without having looked at what qt-* does) | 14:08 |
*** belen <belen!~Adium@134.134.139.76> has quit IRC | 14:10 | |
jubr | That depends on whether everything needs to be compiled. We kept our codebase component-based, in separate executables actually, communicating locally. This actually makes for a robust, somewhat crash-resistant architecture. Building + packaging can then be done separately as well. | 14:10 |
rburton | sveinse: i lean towards build-everything-once for the tune, split it up in packaging, then each image or machine can pull in the packages that it needs | 14:11 |
*** belen <belen!~Adium@134.134.139.76> has joined #yocto | 14:11 | |
* jubr agrees | 14:11 | |
*** bottazzini <bottazzini!~realBigfo@192.55.54.42> has joined #yocto | 14:12 | |
sveinse | ...which actually makes the most sense, I think, so I concur as well. | 14:12 |
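A sketch of that build-once, split-in-packaging approach; the package and file names are invented for illustration:

    # in the application recipe
    PACKAGES =+ "${PN}-gui"
    FILES_${PN}-gui = "${bindir}/myapp-gui"
    RDEPENDS_${PN}-gui += "${PN}"
    # the image for the machine with a display installs ${PN}-gui,
    # the headless image installs only ${PN}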
*** benjamirc1 <benjamirc1!~besquive@134.134.139.74> has joined #yocto | 14:13 | |
jubr | Do you use the opkg feeds to update in the field, or are all sw updates monolithic? | 14:13 |
sveinse | sw updates are monolithic. We tried in-field package updates in our old OMAP-based system, which turned out to be too slow: nobody would wait 40 mins for a SW update. Monolithic took 5 mins | 14:15 |
sveinse | ubuntu deb based though. We're migrating to Yocto now | 14:15 |
jubr | We've been doing incremental going on 8 years now. Just adding monolithic actually, to add some flexibility wrt system/rootfs updates. | 14:20 |
jubr | The Qt pkg updates could be a bit slow though, true enough. | 14:21 |
sveinse | Yeah. Our storage system (sdcard) is painfully slow, so it didn't work out | 14:21 |
jubr | We have UBIFS on our old i.MX27 devices. Nightmare. | 14:22 |
sveinse | If I want a feature flag in my recipe, such as with-graphics, what is the common method of doing this in yocto? Can anyone point out an example, please? | 14:24 |
sveinse | I'm thinking if this setting belongs in local.conf or not | 14:24 |
sveinse | (and I have to learn this override stuff, since it translates to real changes in DEPENDS and RDEPENDS) | 14:25 |
rburton | sveinse: is that something that will vary by machine and won't be changed otherwise? | 14:25 |
sveinse | rburton: By role, actually, which also implies machine. That is, the "controller" role must have a display, so it only works on a specific machine with a display. | 14:28 |
sveinse | I'm starting to think if I want something like "core-image-controller" | 14:29 |
sveinse | But that doesn't work too well with the TUNE thinking, as you'd have to configure Qt with graphics in both cases | 14:30 |
rburton | is qt not split into graphical and non-graphical bits anyway | 14:30 |
rburton | so even if you build all of qt5 but only link to the core, you don't pull in the graphical bits | 14:31 |
*** ziggo <ziggo!~ziggo@217.89.178.116> has quit IRC | 14:31 | |
sveinse | rburton: Yeah, but to build qt5 you need to build it with graphics, don't you? Even if you don't end up pulling in the gfx libs | 14:31 |
sveinse | So you end up having to supply some opengl lib for qt, which you eventually might not need | 14:32 |
sveinse | ergo, on a MACHINE that does not need gfx, you don't need to configure it with gfx at all | 14:32 |
rburton | but if you never install it and you'll be sharing from sstate anyway and you'll be building qt with graphics later, what's the problem? | 14:32 |
rburton | sure if you have a machine that never has graphics then say so in the machine config, and qt will never build with graphics | 14:33 |
rburton | well, tune. | 14:33 |
*** belen <belen!~Adium@134.134.139.76> has quit IRC | 14:34 | |
*** belen <belen!~Adium@134.134.139.76> has joined #yocto | 14:34 | |
sveinse | qt5 is built to tune it seems | 14:34 |
sveinse | which I tend to prefer, since its more generic | 14:35 |
rburton | yeah | 14:36 |
rburton | if you have various different configurations which have the same base hardware but different software, just build several different images | 14:36 |
*** adelcast <adelcast!~adelcast@130.164.62.126> has quit IRC | 14:36 | |
rburton | and hopefully the pieces you use can build modular enough | 14:36 |
jubr | I think you can choose to build parts of Qt (qt-base) without the graphical stuff (qt-declarative) | 14:38 |
sveinse | So I have two machines, two HWs, "A" and "B". Same tune. The former with display, the latter without. The main application is built off the same codebase. Now my job is to test out a new MACHINE/BSP for "B", but I don't want to bother getting gfx up and running on it, so I somehow need to allow my recipe to not use any gfx | 14:38 |
rburton | are we talking opengl or x11 or something else | 14:38 |
sveinse | qt5 with opengl | 14:39 |
sveinse | imx6 based | 14:39 |
sveinse | I just remembered seeing that qt5 has some feature selector system when configuring. I'll take a look at it | 14:41 |
present | packageconfig variable | 14:42 |
*** Guest70483 <Guest70483!~blitz@p5796D9E5.dip0.t-ipconnect.de> has joined #yocto | 14:44 | |
Guest70483 | hi everyone | 14:44 |
rburton | sveinse: you have to choose whether you want to share qt5 between tunes or not. | 14:44 |
sveinse | rburton: Yes I do. | 14:44 |
rburton | sveinse: if you don't then make it machine-specific and remove opengl/x11/etc from the machine with no GL. if you do then you have to do a build with graphics. | 14:44 |
sveinse | I can tag on to something like this from meta-qt5: PACKAGECONFIG_GL ?= "${@base_contains('DISTRO_FEATURES', 'opengl', 'gl', '', d)}" | 14:45 |
rburton | yes but that's tune-wide | 14:45 |
sveinse | ah, right | 14:45 |
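The same pattern in an application recipe would look roughly like this; only the DISTRO_FEATURES test mirrors the meta-qt5 line quoted above, while the configure switches and the egl dependency are placeholders:

    PACKAGECONFIG ??= "${@base_contains('DISTRO_FEATURES', 'opengl', 'graphics', '', d)}"
    PACKAGECONFIG[graphics] = "--enable-graphics,--disable-graphics,qtbase virtual/egl"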
rburton | you could use a different distro for controller and worker, but then you'll likely be causing more builds | 14:46 |
Guest70483 | got a problem building an image for the raspberry pi 3. i always get this error: "/poky-krogoth-15.0.0/meta-openembedded/meta-python/recipes-devtools/python/python-dbus_1.2.0.bb, do_configure) failed with exit code '1'" and "configure: error: could not find Python headers". any hints? | 14:46 |
boucman_work | sveinse: I tend to think that if the machines have any difference, then they are not the same MACHINE (they will still share everything that is not MACHINE specific) | 14:47 |
*** Anticom <Anticom!~timo.m@217.6.33.234> has joined #yocto | 14:47 | |
jubr | sveinse: Maybe something like DISTRO_FEATURES_remove_machineB = "opengl" ? | 14:47 |
boucman_work | jubr: DISTRO_FEATURES are not supposed to change from machine to machine... that would cause everything to rebuild every time you switch machine | 14:48 |
jubr | Hmm.. good point | 14:48 |
jubr | sveinse: does the qt + gfx build ok for machine B? | 14:50 |
sveinse | I'm thinking out loud here: "A" and "B" as we have them today are two different MACHINEs, but evidently the same tune, as most code is built to the tune, including Qt. So moving to variscite imx for "B" would imply a new MACHINE, but quite possibly a new tune as well. So there are no benefits for code reuse there. | 14:50 |
sveinse | jubr: It does for the old "B" as it's the same tune as "A". And qt + gfx apparently belongs to the tune. But I don't know for the new "B" | 14:51 |
boucman_work | would it be a different tune ? tune is usually linked to cpu type, and both are imx6 iiuc | 14:51 |
sveinse | I don't know yet. It's a completely different yocto setup and BSP, so I have no idea | 14:52 |
boucman_work | k | 14:53 |
*** jku <jku!~jku@dyj170ycrv18---3wlh9y-3.rev.dnainternet.fi> has joined #yocto | 14:59 | |
sveinse | I just need to comment out the lines with sdl in local.conf on older yocto releases to get compilation going on ubuntu 16.04? | 15:03 |
*** adelcast <adelcast!~adelcast@130.164.62.126> has joined #yocto | 15:04 | |
sveinse | Just downloaded variscite's current yocto release... | 15:04 |
*** Guest70483 <Guest70483!~blitz@p5796D9E5.dip0.t-ipconnect.de> has quit IRC | 15:04 | |
rburton | sveinse: on any release, assuming it supports libsdl-native. | 15:04 |
*** BlitzBlizz <BlitzBlizz!~blitz@p5796D9E5.dip0.t-ipconnect.de> has joined #yocto | 15:06 | |
sveinse | well it failed. It complains "Install SDL devel", yet it is installed. Similar to what we talked about previously. Just this time around, I don't care about sdl if I can avoid it. (Don't know the implications of that, though) | 15:06 |
BlitzBlizz | hi guys | 15:06 |
istarilucky | What is the correct approach to do a bootable image with a rootfs greater than 4GB? | 15:07 |
*** Anticom <Anticom!~timo.m@217.6.33.234> has quit IRC | 15:08 | |
BlitzBlizz | i've got a problem building an image. i always get the error that the python headers couldn't be found. | 15:08 |
boucman_work | sveinse: what version of yocto is variscite based on ? | 15:08 |
sveinse | boucman_work: how can I find out? | 15:09 |
*** evanmeagher <evanmeagher!~MongooseW@71.231.164.126> has joined #yocto | 15:10 | |
BlitzBlizz | any clues? | 15:11 |
*** sameo <sameo!samuel@nat/intel/x-dgizzzzkumbikykh> has quit IRC | 15:14 | |
boucman_work | sveinse: when bitbake runs, it usually shows what branches were actually cloned; usually they use the yocto version name | 15:14 |
*** billr <billr!~wcrandle@134.134.139.82> has joined #yocto | 15:16 | |
sveinse | boucman_work: It does not. It's somewhere around 2.0 or 2.0.1 I think. The strange thing is that I find the 2.0 tag in both (variscite tree and vanilla poky), but after that they vary a lot. I find /some/ patches from vanilla poky in the variscite tree. (Hmm, I need to familiarize myself better with git it seems) | 15:18 |
*** Amynka <Amynka!~frozen@gentoo/developer/amynka> has quit IRC | 15:19 | |
*** fl0v0 <fl0v0!~fvo@pD9F6B30B.dip0.t-ipconnect.de> has joined #yocto | 15:23 | |
*** psadro <psadro!~Thunderbi@2620:0:ed0:800a:72f3:95ff:fe1d:9866> has quit IRC | 15:25 | |
*** jku <jku!~jku@dyj170ycrv18---3wlh9y-3.rev.dnainternet.fi> has quit IRC | 15:26 | |
rburton | sveinse: can you not try rebasing the vendor branch on top of 2.0.2? | 15:26 |
*** mortderire <mortderire!rkinsell@nat/intel/x-jrztclznfjwlfkrn> has quit IRC | 15:28 | |
*** sno <sno!~sno@62.157.143.22> has quit IRC | 15:30 | |
BlitzBlizz | oO | 15:30 |
*** florian <florian!~fuchs@Maemo/community/contributor/florian> has quit IRC | 15:32 | |
*** qt-x <qt-x!~Thunderbi@217.10.196.2> has quit IRC | 15:34 | |
*** ntl <ntl!~nathanl@cpe-24-242-74-130.austin.res.rr.com> has joined #yocto | 15:39 | |
*** boucman_work <boucman_work!~boucman@229.29.205.77.rev.sfr.net> has quit IRC | 15:40 | |
*** RagBal <RagBal!~RagBal@82-168-15-181.ip.open.net> has quit IRC | 15:42 | |
*** Rootert <Rootert!~Rootert@82-168-15-181.ip.open.net> has quit IRC | 15:42 | |
*** jkroon_ <jkroon_!~jkroon@fw.mikrodidakt.se> has quit IRC | 15:45 | |
*** paulg <paulg!~paulg@128.224.252.2> has quit IRC | 15:49 | |
*** psadro <psadro!~Thunderbi@2620:0:ed0:800a:72f3:95ff:fe1d:9866> has joined #yocto | 15:50 | |
*** csanchezdll <csanchezdll!~user@galileo.kdpof.com> has left #yocto | 15:50 | |
*** tjamison <tjamison!~tjamison@38.104.105.146> has joined #yocto | 15:51 | |
*** nillerbrun <nillerbrun!~nathani@mail.validmanufacturing.com> has quit IRC | 16:10 | |
*** Amynka <Amynka!~frozen@gentoo/developer/amynka> has joined #yocto | 16:14 | |
*** rajm <rajm!~robertmar@82-70-136-246.dsl.in-addr.zen.co.uk> has quit IRC | 16:18 | |
*** hamis_lt_u <hamis_lt_u!~irfan@110.93.212.98> has quit IRC | 16:26 | |
*** mortderire <mortderire!rkinsell@nat/intel/x-rzyouutfgtjzsrpw> has joined #yocto | 16:28 | |
*** tlwoerner <tlwoerner!~trevor@unaffiliated/tlwoerner> has quit IRC | 16:29 | |
*** billr <billr!~wcrandle@134.134.139.82> has quit IRC | 16:30 | |
*** tlwoerner <tlwoerner!~trevor@unaffiliated/tlwoerner> has joined #yocto | 16:31 | |
*** fl0v0 <fl0v0!~fvo@pD9F6B30B.dip0.t-ipconnect.de> has quit IRC | 16:34 | |
*** sno <sno!~sno@89.204.130.138> has joined #yocto | 16:40 | |
*** tlwoerner <tlwoerner!~trevor@unaffiliated/tlwoerner> has quit IRC | 16:47 | |
*** billr <billr!~wcrandle@134.134.139.83> has joined #yocto | 16:48 | |
*** t0mmy_ <t0mmy_!~tprrt@217.114.201.133> has quit IRC | 16:51 | |
*** yann <yann!~yann@85-171-21-92.rev.numericable.fr> has quit IRC | 16:54 | |
*** present <present!~present@46.218.87.184> has quit IRC | 17:00 | |
*** justanotherboy <justanotherboy!mlopezva@nat/intel/x-imaybtjfnkrtxvnl> has joined #yocto | 17:00 | |
*** tlwoerner <tlwoerner!~trevor@unaffiliated/tlwoerner> has joined #yocto | 17:00 | |
*** justanotherboy1 <justanotherboy1!~mlopezva@134.134.139.83> has quit IRC | 17:01 | |
*** Nilesh_ <Nilesh_!uid116340@gateway/web/irccloud.com/x-roopprsltaobavum> has joined #yocto | 17:01 | |
*** belen <belen!~Adium@134.134.139.76> has quit IRC | 17:01 | |
*** mortderire <mortderire!rkinsell@nat/intel/x-rzyouutfgtjzsrpw> has quit IRC | 17:05 | |
*** tlwoerner <tlwoerner!~trevor@unaffiliated/tlwoerner> has quit IRC | 17:05 | |
*** MWelchUK <MWelchUK!~martyn@host81-135-119-51.range81-135.btcentralplus.com> has quit IRC | 17:06 | |
*** tlwoerner <tlwoerner!~trevor@unaffiliated/tlwoerner> has joined #yocto | 17:18 | |
*** MWelchUK <MWelchUK!~martyn@host109-145-193-171.range109-145.btcentralplus.com> has joined #yocto | 17:19 | |
*** [Sno] <[Sno]!~sno@89.204.137.148> has joined #yocto | 17:24 | |
*** sno <sno!~sno@89.204.130.138> has quit IRC | 17:25 | |
*** rob_w <rob_w!~rob@unaffiliated/rob-w/x-1112029> has joined #yocto | 17:31 | |
*** toscalix <toscalix!~toscalix@80.91.95.202> has quit IRC | 17:32 | |
*** [Sno] <[Sno]!~sno@89.204.137.148> has quit IRC | 17:32 | |
*** justanotherboy <justanotherboy!mlopezva@nat/intel/x-imaybtjfnkrtxvnl> has quit IRC | 17:34 | |
*** Gintaro <Gintaro!~gintaro@geertswei.nl> has quit IRC | 17:36 | |
*** paulg <paulg!~paulg@70.52.193.89> has joined #yocto | 17:41 | |
*** Gintaro <Gintaro!~gintaro@geertswei.nl> has joined #yocto | 17:44 | |
*** gtristan <gtristan!~tristanva@82-70-136-246.dsl.in-addr.zen.co.uk> has quit IRC | 17:44 | |
*** justanotherboy <justanotherboy!mlopezva@nat/intel/x-fpsioacadeqkdlwx> has joined #yocto | 17:47 | |
*** Gintaro <Gintaro!~gintaro@geertswei.nl> has quit IRC | 17:47 | |
*** Gintaro <Gintaro!~gintaro@geertswei.nl> has joined #yocto | 17:48 | |
*** Gintaro <Gintaro!~gintaro@geertswei.nl> has quit IRC | 17:52 | |
*** Gintaro <Gintaro!~gintaro@geertswei.nl> has joined #yocto | 17:53 | |
*** Gintaro <Gintaro!~gintaro@geertswei.nl> has quit IRC | 17:56 | |
*** yann <yann!~yann@nan92-1-81-57-214-146.fbx.proxad.net> has joined #yocto | 18:03 | |
*** Gintaro <Gintaro!~gintaro@geertswei.nl> has joined #yocto | 18:03 | |
*** khem` <khem`!~khem@unaffiliated/khem> has joined #yocto | 18:06 | |
*** Gintaro <Gintaro!~gintaro@geertswei.nl> has quit IRC | 18:07 | |
*** khem` <khem`!~khem@unaffiliated/khem> has quit IRC | 18:08 | |
*** khem` <khem`!~khem@unaffiliated/khem> has joined #yocto | 18:09 | |
*** jwessel <jwessel!~jwessel@128.224.252.2> has quit IRC | 18:10 | |
*** evanmeagher <evanmeagher!~MongooseW@71.231.164.126> has quit IRC | 18:11 | |
*** Gintaro <Gintaro!~gintaro@geertswei.nl> has joined #yocto | 18:13 | |
*** Gintaro <Gintaro!~gintaro@geertswei.nl> has quit IRC | 18:17 | |
*** sno <sno!~sno@PR04.hotspot.koeln> has joined #yocto | 18:24 | |
*** blueness <blueness!~blueness@gentoo/developer/blueness> has quit IRC | 18:32 | |
*** Gintaro <Gintaro!~gintaro@geertswei.nl> has joined #yocto | 18:37 | |
*** blueness <blueness!~blueness@gentoo/developer/blueness> has joined #yocto | 18:40 | |
*** Gintaro <Gintaro!~gintaro@geertswei.nl> has quit IRC | 18:40 | |
*** evanmeagher <evanmeagher!~MongooseW@71.231.164.126> has joined #yocto | 18:45 | |
*** Gintaro <Gintaro!~gintaro@geertswei.nl> has joined #yocto | 18:48 | |
*** Ulfalizer <Ulfalizer!~Ulfalizer@ip5f5bffc3.dynamic.kabel-deutschland.de> has joined #yocto | 18:51 | |
*** Gintaro <Gintaro!~gintaro@geertswei.nl> has quit IRC | 18:51 | |
*** evanmeagher <evanmeagher!~MongooseW@71.231.164.126> has quit IRC | 18:57 | |
*** Gintaro <Gintaro!~gintaro@geertswei.nl> has joined #yocto | 19:02 | |
*** townxelliot <townxelliot!~ell@176.249.240.35> has quit IRC | 19:03 | |
*** Gintaro <Gintaro!~gintaro@geertswei.nl> has quit IRC | 19:05 | |
*** Gintaro <Gintaro!~gintaro@geertswei.nl> has joined #yocto | 19:07 | |
*** rob_w <rob_w!~rob@unaffiliated/rob-w/x-1112029> has quit IRC | 19:09 | |
*** Gintaro <Gintaro!~gintaro@geertswei.nl> has quit IRC | 19:10 | |
*** challinan <challinan!~chris@2601:702:c100:8be0:ec69:cc33:dd0e:7741> has quit IRC | 19:10 | |
*** Gintaro <Gintaro!~gintaro@geertswei.nl> has joined #yocto | 19:11 | |
*** crankslider <crankslider!~slidercra@unaffiliated/slidercrank> has joined #yocto | 19:12 | |
*** khem` <khem`!~khem@unaffiliated/khem> has quit IRC | 19:13 | |
*** Gintaro <Gintaro!~gintaro@geertswei.nl> has quit IRC | 19:14 | |
*** Gintaro <Gintaro!~gintaro@geertswei.nl> has joined #yocto | 19:16 | |
*** blueness <blueness!~blueness@gentoo/developer/blueness> has quit IRC | 19:18 | |
*** Gintaro <Gintaro!~gintaro@geertswei.nl> has quit IRC | 19:19 | |
*** bottazzini <bottazzini!~realBigfo@192.55.54.42> has quit IRC | 19:20 | |
*** pohly <pohly!~pohly@p57A5603A.dip0.t-ipconnect.de> has quit IRC | 19:20 | |
*** bottazzini <bottazzini!~realBigfo@192.55.54.42> has joined #yocto | 19:20 | |
*** Gintaro <Gintaro!~gintaro@geertswei.nl> has joined #yocto | 19:25 | |
*** blueness <blueness!~blueness@gentoo/developer/blueness> has joined #yocto | 19:26 | |
*** pohly <pohly!~pohly@p57A5603A.dip0.t-ipconnect.de> has joined #yocto | 19:27 | |
*** blueness <blueness!~blueness@gentoo/developer/blueness> has quit IRC | 19:28 | |
*** sno <sno!~sno@PR04.hotspot.koeln> has quit IRC | 19:28 | |
*** Gintaro <Gintaro!~gintaro@geertswei.nl> has quit IRC | 19:28 | |
*** blueness <blueness!~blueness@gentoo/developer/blueness> has joined #yocto | 19:32 | |
*** billr <billr!~wcrandle@134.134.139.83> has quit IRC | 19:35 | |
*** aehs29 <aehs29!~aehernan@134.134.139.82> has joined #yocto | 19:37 | |
*** pohly <pohly!~pohly@p57A5603A.dip0.t-ipconnect.de> has quit IRC | 19:38 | |
*** sno <sno!~sno@PR04.hotspot.koeln> has joined #yocto | 19:39 | |
*** evanmeagher <evanmeagher!~MongooseW@71.231.164.126> has joined #yocto | 19:52 | |
*** Gintaro <Gintaro!~gintaro@geertswei.nl> has joined #yocto | 19:53 | |
*** evanmeagher <evanmeagher!~MongooseW@71.231.164.126> has quit IRC | 19:54 | |
*** Gintaro <Gintaro!~gintaro@geertswei.nl> has quit IRC | 19:56 | |
*** Nilesh_ <Nilesh_!uid116340@gateway/web/irccloud.com/x-roopprsltaobavum> has quit IRC | 19:57 | |
*** Gintaro <Gintaro!~gintaro@geertswei.nl> has joined #yocto | 19:58 | |
*** Gintaro <Gintaro!~gintaro@geertswei.nl> has quit IRC | 20:01 | |
*** Gintaro <Gintaro!~gintaro@geertswei.nl> has joined #yocto | 20:03 | |
*** Gintaro <Gintaro!~gintaro@geertswei.nl> has quit IRC | 20:07 | |
*** Gintaro <Gintaro!~gintaro@geertswei.nl> has joined #yocto | 20:08 | |
*** eraineri <eraineri!~eraineri@cpe-70-119-111-146.tx.res.rr.com> has joined #yocto | 20:10 | |
*** Gintaro <Gintaro!~gintaro@geertswei.nl> has quit IRC | 20:11 | |
*** paulg <paulg!~paulg@70.52.193.89> has quit IRC | 20:13 | |
*** evanmeagher <evanmeagher!~MongooseW@71.231.164.126> has joined #yocto | 20:17 | |
*** fledermaus <fledermaus!~vivek@2a00:1098:5:0:ccb0:826a:ef2d:64d1> has quit IRC | 20:19 | |
*** Gintaro <Gintaro!~gintaro@geertswei.nl> has joined #yocto | 20:19 | |
*** bluelightning <bluelightning!~paul@pdpc/supporter/professional/bluelightning> has joined #yocto | 20:21 | |
*** marka <marka!~marka@128.224.252.2> has quit IRC | 20:22 | |
*** Gintaro <Gintaro!~gintaro@geertswei.nl> has quit IRC | 20:22 | |
*** jwessel <jwessel!~jwessel@128.224.252.2> has joined #yocto | 20:26 | |
*** sno <sno!~sno@PR04.hotspot.koeln> has quit IRC | 20:29 | |
*** Crofton <Crofton!~balister@fw.whitepine.k12.nv.us> has joined #yocto | 20:29 | |
*** obsrwr <obsrwr!~otp-amois@catv-78-139-0-146.catv.broadband.hu> has quit IRC | 20:35 | |
*** anselmolsm <anselmolsm!~anselmols@192.55.55.41> has quit IRC | 20:42 | |
*** anselmolsm <anselmolsm!~anselmols@192.55.55.41> has joined #yocto | 20:43 | |
*** Gintaro <Gintaro!~gintaro@geertswei.nl> has joined #yocto | 20:47 | |
*** JaMa <JaMa!~martin@ip-89-176-104-169.net.upcbroadband.cz> has quit IRC | 20:47 | |
*** Gintaro <Gintaro!~gintaro@geertswei.nl> has quit IRC | 20:50 | |
*** dmoseley <dmoseley!~dmoseley@6532158hfc157.tampabay.res.rr.com> has quit IRC | 20:51 | |
*** Crofton <Crofton!~balister@fw.whitepine.k12.nv.us> has quit IRC | 20:54 | |
*** khem <khem!~khem@unaffiliated/khem> has quit IRC | 20:55 | |
*** ntl <ntl!~nathanl@cpe-24-242-74-130.austin.res.rr.com> has quit IRC | 20:55 | |
*** khem <khem!~khem@unaffiliated/khem> has joined #yocto | 20:56 | |
*** sameo <sameo!~samuel@192.55.55.39> has joined #yocto | 21:00 | |
Ulfalizer | how can basing WORKDIR on MULTIMACH_TARGET_SYS be safe? MULTIMACH_TARGET_SYS only seems to include the "type" of the machine (PACKAGE_ARCH, TARGET_VENDOR, TARGET_OS). what if some recipe overrides variables based on the specific MACHINE? wouldn't that cause collisions, since MACHINE isn't included in MULTIMACH_TARGET_SYS? | 21:03 |
bluelightning | Ulfalizer: that makes the recipe machine-specific, which changes MACHINE_ARCH and therefore PACKAGE_ARCH | 21:04 |
bluelightning | well, it may change them automatically under certain circumstances, but you should really explicitly set PACKAGE_ARCH = "${MACHINE_ARCH}" in the recipe if you know you're making it machine-specific | 21:05 |
*** khem <khem!~khem@unaffiliated/khem> has quit IRC | 21:05 | |
*** Gintaro <Gintaro!~gintaro@geertswei.nl> has joined #yocto | 21:06 | |
Ulfalizer | what if you have two different machines with the same MACHINE_ARCH? is that supposed to never happen? | 21:07 |
*** cference <cference!~cference@64.187.189.2> has quit IRC | 21:07 | |
Ulfalizer | bluelightning: what i'm *really* wondering though is whether the descriptions in the last comment of https://bugzilla.yoctoproject.org/show_bug.cgi?id=9988 are accurate :) | 21:08 |
yocti | Bug 9988: enhancement, Medium, 2.2 M3, srifenbark, IN PROGRESS REVIEW , Suggested clarification of the STAGING_DIR_* glossary entries | 21:08 |
*** khem <khem!~khem@unaffiliated/khem> has joined #yocto | 21:09 | |
*** Gintaro <Gintaro!~gintaro@geertswei.nl> has quit IRC | 21:10 | |
*** Gintaro <Gintaro!~gintaro@geertswei.nl> has joined #yocto | 21:11 | |
*** evanmeagher <evanmeagher!~MongooseW@71.231.164.126> has quit IRC | 21:11 | |
*** Crofton <Crofton!~balister@fw.whitepine.k12.nv.us> has joined #yocto | 21:11 | |
bluelightning | Ulfalizer: looks fine to me | 21:11 |
*** sno <sno!~sno@b2b-78-94-80-58.unitymedia.biz> has joined #yocto | 21:11 | |
Ulfalizer | great, thanks! | 21:12 |
bluelightning | IMO the PACKAGE_ARCH / MACHINE_ARCH discussion doesn't really belong at this level, because machine-specificity already comes in via PACKAGE_ARCH | 21:12 |
Ulfalizer | wasn't going to add anything to the glossary entries re. that. i was just curious. | 21:13 |
*** Gintaro <Gintaro!~gintaro@geertswei.nl> has quit IRC | 21:14 | |
Ulfalizer | bluelightning: would it still be good if i changed "uniquely identifies the type of the target system" to "uniquely identifies the target system"? that's a bit more concrete. i only wrote "type of" because i wasn't sure if the same MULTIMACH_TARGET_SYS value could cover multiple machines. | 21:14 |
bluelightning | well, it does often cover multiple machines - in fact the default is for it to be architecture-specific not machine-specific | 21:15 |
bluelightning | FWIW, I think that change will be too subtle for most readers to make a distinction in any case | 21:16 |
Ulfalizer | in that case i'm probably still missing something. if two packages use the same architecture but specialize on the machine, then it seems it'd collide. | 21:16 |
*** berton <berton!~fabio@177.100.227.79> has quit IRC | 21:17 | |
Ulfalizer | but maybe you can't "specialize on the machine"... | 21:17 |
Ulfalizer | in a way that makes sense here at least | 21:17 |
Ulfalizer | ah well, i'm happy as long as the current description makes sense | 21:17 |
-YoctoAutoBuilder- build #608 of nightly-oe-selftest is complete: Failure [failed Running oe-selftest] Build details are at http://autobuilder.yoctoproject.org/main/builders/nightly-oe-selftest/builds/608 | 21:18 | |
-YoctoAutoBuilder- build #587 of nightly-world-lsb is complete: Failure [failed BuildImages] Build details are at http://autobuilder.yoctoproject.org/main/builders/nightly-world-lsb/builds/587 | 21:19 | |
*** evanmeagher <evanmeagher!~MongooseW@71.231.164.126> has joined #yocto | 21:19 | |
*** evanmeagher <evanmeagher!~MongooseW@71.231.164.126> has joined #yocto | 21:19 | |
bluelightning | Ulfalizer: using the same PACKAGE_ARCH and then customising per machine will break things in a multi-machine build and shouldn't be done (assuming it does not trigger PACKAGE_ARCH to change to ${MACHINE_ARCH} automatically, which it will under some circumstances) | 21:20 |
Ulfalizer | bluelightning: alright... then i think i'm following how it works at least. thanks! | 21:22 |
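A minimal sketch of the pattern bluelightning describes above, for a hypothetical recipe that ships machine-specific content (the recipe name is made up; only the PACKAGE_ARCH assignment comes from the discussion). Because MULTIMACH_TARGET_SYS is built from PACKAGE_ARCH, TARGET_VENDOR and TARGET_OS, switching PACKAGE_ARCH to MACHINE_ARCH is what keeps per-machine WORKDIRs from colliding:

    # hypothetical-machine-config_1.0.bb (illustrative recipe name)
    # Mark the recipe machine-specific so its packages and WORKDIR are
    # keyed on the machine rather than on the shared architecture/tune.
    PACKAGE_ARCH = "${MACHINE_ARCH}"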
BlitzBlizz | hi guys | 21:22 |
Ulfalizer | hello | 21:22 |
BlitzBlizz | i've got a problem with building an image for the rpi 3. every build stops with the error buildDir/poky-krogoth-15.0.0/meta-openembedded/meta-python/recipes-devtools/python/python-dbus_1.2.0.bb, do_configure) failed with exit code '1' | 21:24 |
BlitzBlizz | can anyone help me? | 21:24 |
Ulfalizer | BlitzBlizz: do you get any other error messages? | 21:26 |
*** Gintaro <Gintaro!~gintaro@geertswei.nl> has joined #yocto | 21:26 | |
moto-timo | which host? | 21:26 |
BlitzBlizz | configure: error: could not find Python headers | 21:26 |
BlitzBlizz | debian 8 virtual machine | 21:26 |
moto-timo | I saw the same error while build testing for 2.1.1 fix for Fedora-24 | 21:27 |
moto-timo | but I did not investigate any further | 21:27 |
BlitzBlizz | i successfully built an image before, and after that i added meta-networking and meta-python to the bblayers.conf because i wanted proftpd, and since then this error pops up | 21:28 |
moto-timo | did you checkout the "krogoth" branch of meta-openembedded? | 21:29 |
BlitzBlizz | yes | 21:29 |
*** Gintaro <Gintaro!~gintaro@geertswei.nl> has quit IRC | 21:29 | |
moto-timo | can you pastebin the log? | 21:30 |
BlitzBlizz | this branch https://github.com/openembedded/meta-openembedded.git | 21:30 |
*** tlwoerner <tlwoerner!~trevor@unaffiliated/tlwoerner> has quit IRC | 21:30 | |
*** sgw_ <sgw_!~sgw_@134.134.139.82> has quit IRC | 21:30 | |
moto-timo | that's the repo, but you need https://github.com/openembedded/meta-openembedded/tree/krogoth | 21:31 |
moto-timo | but I need to see the output of the config log to help any further | 21:32 |
BlitzBlizz | here is the pastebin | 21:33 |
BlitzBlizz | http://pastebin.com/m6ge5fJe | 21:33 |
BlitzBlizz | i just wanted to build in an ftp server. | 21:35 |
*** Gintaro <Gintaro!~gintaro@geertswei.nl> has joined #yocto | 21:35 | |
*** khem` <khem`!~khem@unaffiliated/khem> has joined #yocto | 21:36 | |
Ulfalizer | BlitzBlizz: try adding 'export BUILD_SYS' and 'export HOST_SYS' to the recipe. dunno if it's the proper solution, but it might fix it. | 21:37 |
moto-timo | jinx | 21:37 |
moto-timo | That's the exact same failure I saw and the same solution I thought of trying (but didn't) | 21:37 |
BlitzBlizz | which recipe? | 21:38 |
Ulfalizer | BlitzBlizz: python-dbus_1.2.0.bb | 21:38 |
*** Gintaro <Gintaro!~gintaro@geertswei.nl> has quit IRC | 21:39 | |
BlitzBlizz | is it possible to exclude python-dbus? | 21:39 |
Ulfalizer | if you search the pastebin for "Error:", you'll see that some os.getenv()'s are failing, which in turn causes a .replace() to fail | 21:39 |
Ulfalizer | they return None, and that makes .replace() unhappy | 21:39 |
BlitzBlizz | yes i saw this | 21:39 |
Ulfalizer | BlitzBlizz: not sure | 21:40 |
*** dv_ <dv_!~quassel@62.178.118.86> has quit IRC | 21:40 | |
* Ulfalizer found https://lists.yoctoproject.org/pipermail/yocto/2014-May/019863.html as well | 21:40 | |
*** Gintaro <Gintaro!~gintaro@geertswei.nl> has joined #yocto | 21:40 | |
moto-timo | https://github.com/openembedded/openembedded-core/blob/krogoth/meta/recipes-devtools/python/python-dbus_1.2.0.bb | 21:40 |
*** dv_ <dv_!~quassel@62-178-118-86.cable.dynamic.surfer.at> has joined #yocto | 21:40 | |
moto-timo | has the exact fix Ulfalizer mentioned. | 21:40 |
moto-timo | so you can PNBLACKLIST the one from meta-python temporarily | 21:41 |
moto-timo | (and we should probably consider dropping the one from meta-python) | 21:41 |
BlitzBlizz | ok. i will give it a try | 21:42 |
*** Gintaro <Gintaro!~gintaro@geertswei.nl> has quit IRC | 21:43 | |
moto-timo | I know the one from core built ok for beaglebone machine | 21:43 |
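The fix under discussion, as it would look when added to the meta-python python-dbus_1.2.0.bb recipe; these are the two lines Ulfalizer suggested and that the oe-core krogoth recipe linked above already carries. Without them the values never reach the task environment, so the configure helper's os.getenv() calls return None, which is what breaks the .replace() seen in the pastebin:

    # python-dbus_1.2.0.bb: export the cross-build values into the task
    # environment so the dbus-python configure machinery can read them
    export BUILD_SYS
    export HOST_SYS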
*** Gintaro <Gintaro!~gintaro@geertswei.nl> has joined #yocto | 21:45 | |
BlitzBlizz | do i have to build a completely new image? (fyi: i'm new to the whole embedded/yocto thing) | 21:45 |
moto-timo | bitbake is smart enough to carry on where it left off | 21:46 |
BlitzBlizz | i did this and the same error appears | 21:46 |
*** belen <belen!~Adium@134.134.139.76> has joined #yocto | 21:46 | |
BlitzBlizz | i just moved the original .bb file and used the one from the link above | 21:47 |
*** khem` <khem`!~khem@unaffiliated/khem> has quit IRC | 21:47 | |
Ulfalizer | where did you move it? | 21:47 |
BlitzBlizz | to bb.backup | 21:47 |
BlitzBlizz | renamed | 21:48 |
*** adelcast <adelcast!~adelcast@130.164.62.126> has quit IRC | 21:48 | |
Ulfalizer | ok... try running bitbake python-dbus -c cleansstate and then try rebuilding again | 21:48 |
Ulfalizer | not great if that's needed though | 21:48 |
*** adelcast <adelcast!~adelcast@130.164.62.126> has joined #yocto | 21:48 | |
*** Gintaro <Gintaro!~gintaro@geertswei.nl> has quit IRC | 21:48 | |
*** dmoseley <dmoseley!~dmoseley@6532158hfc157.tampabay.res.rr.com> has joined #yocto | 21:48 | |
Ulfalizer | maybe adding exports is an obscure-enough case | 21:48 |
BlitzBlizz | error persists | 21:49 |
Ulfalizer | BlitzBlizz: try adding random junk to the bb file to make sure you didn't mess up :) | 21:49 |
Ulfalizer | as a sanity check | 21:50 |
Ulfalizer | to see that it's getting used | 21:50 |
moto-timo | (or delete tmp and rebuild from sstate) | 21:50 |
moto-timo | but that's a heavier hammer | 21:50 |
BlitzBlizz | this takes 3 hours to rebuild :-) | 21:51 |
Ulfalizer | BlitzBlizz: try running just bitbake python-dbus | 21:51 |
moto-timo | not with sstate it won't | 21:51 |
moto-timo | double negative. my grammar has gone south | 21:52 |
BlitzBlizz | sanity check successful | 21:52 |
BlitzBlizz | parse error | 21:52 |
moto-timo | bitbake -c cleansstate python-dbus && bitbake python-dbus | 21:53 |
moto-timo | (after you fix the parse error) | 21:53 |
rburton | parse error implies you made a typo | 21:53 |
* rburton is done for drive-by contributions and passes it back to moto-timo :) | 21:53 | |
rburton | g'night :) | 21:53 |
moto-timo | lol | 21:53 |
rburton | moto-timo: good try with #99999 but i saw through your cunning plan | 21:54 |
moto-timo | g'night | 21:54 |
moto-timo | rburton: of course you did | 21:54 |
rburton | wish we had more to redistribute :( | 21:54 |
moto-timo | it was just for the laugh | 21:54 |
rburton | there should be ~3 in JF somewhere | 21:54 |
moto-timo | we need to kickstart another run | 21:54 |
rburton | i wonder if davest has some in a drawer somewhere | 21:54 |
rburton | yeah that would mean doing a 3d scan of an existing one | 21:55 |
rburton | the model is way lost | 21:55 |
BlitzBlizz | this "bitbake -c cleansstate python-dbus && bitbake python-dbus" produced the pastebin again | 21:55 |
*** challinan <challinan!~chris@173-10-226-189-BusName-WestFlorida.hfc.comcastbusiness.net> has joined #yocto | 21:55 | |
rburton | i may have "liberated" a copy of the entire openedhand git server before it got shut down post-acquisition and the model was never archived in the artwork repo :( | 21:56 |
moto-timo | bummer | 21:56 |
rburton | yeah | 21:56 |
Ulfalizer | BlitzBlizz: do you have some other python-dbus* bb files besides the python-dbus_1.2.0.bb one? | 21:58 |
Ulfalizer | and does that one have 'export BUILD_SYS' and 'export HOST_SYS' in it? | 21:58 |
moto-timo | ^^^ very important | 21:59 |
Ulfalizer | python-dbus_1.2.0.bb that is | 21:59 |
BlitzBlizz | only python-dbus_1.2.0.bb and python-dbus_1.2.0.bb.backup | 21:59 |
Ulfalizer | BlitzBlizz: and python-dbus_1.2.0.bb and python-dbus_1.2.0.bb.backup | 22:00 |
Ulfalizer | Day changed to 22 Jul 2016 | 22:00 |
Ulfalizer | wups | 22:00 |
rburton | pastebin the entire recipe and the whole cooker log i guess | 22:00 |
*** Gintaro <Gintaro!~gintaro@geertswei.nl> has joined #yocto | 22:00 | |
Ulfalizer | BlitzBlizz: and have you double-checked that you have those two export's in it? | 22:00 |
Ulfalizer | and are you getting the exact same errors as before? | 22:01 |
*** bfederau <bfederau!~quassel@service.basyskom.com> has quit IRC | 22:01 | |
*** fmeerkoetter <fmeerkoetter!~quassel@service.basyskom.com> has quit IRC | 22:01 | |
*** bfederau <bfederau!~quassel@service.basyskom.com> has joined #yocto | 22:01 | |
*** fmeerkoetter <fmeerkoetter!~quassel@service.basyskom.com> has joined #yocto | 22:01 | |
BlitzBlizz | yes i checked it. export BUILD_SYS export HOST_SYS are in python-dbus_1.2.0.bb file | 22:02 |
Ulfalizer | BlitzBlizz: on separate lines? | 22:02 |
BlitzBlizz | yes | 22:02 |
Ulfalizer | ok... dunno then | 22:02 |
Ulfalizer | seems odd | 22:02 |
moto-timo | odd indeed | 22:02 |
BlitzBlizz | i made some little changes to the local.conf. i don't know if this can cause the error | 22:03 |
*** Gintaro <Gintaro!~gintaro@geertswei.nl> has quit IRC | 22:03 | |
*** rburton <rburton!~Adium@home.burtonini.com> has quit IRC | 22:04 | |
BlitzBlizz | i added "IMAGE_INSTALL_append = "proftpd" to the local.conf | 22:04 |
BlitzBlizz | is this the right way? | 22:05 |
*** Gintaro <Gintaro!~gintaro@geertswei.nl> has joined #yocto | 22:05 | |
neverpanic | You're not even getting to the point where this matters when running bitbake python-dbus, but there should be a space before proftpd, since _append doesn't add one | 22:06 |
BlitzBlizz | bblayers.conf: http://pastebin.com/5uFNhuEB local.conf: http://pastebin.com/YN16naiv | 22:08 |
*** Gintaro <Gintaro!~gintaro@geertswei.nl> has quit IRC | 22:09 | |
*** evanmeagher <evanmeagher!~MongooseW@71.231.164.126> has quit IRC | 22:09 | |
moto-timo | that bblayers.conf is truncated | 22:10 |
moto-timo | (there's no closing ") | 22:11 |
moto-timo | so probably didn't catch it all when you copied | 22:11 |
*** lamego <lamego!~jose@134.134.139.77> has quit IRC | 22:11 | |
BlitzBlizz | this "bitbake -c cleansstate python-dbus && bitbake python-dbus" produces this: http://pastebin.com/SUwmJv0j | 22:13 |
BlitzBlizz | the closing " is there. didn't catch it | 22:13 |
*** Gintaro <Gintaro!~gintaro@geertswei.nl> has joined #yocto | 22:16 | |
*** anselmolsm <anselmolsm!~anselmols@192.55.55.41> has quit IRC | 22:17 | |
*** anselmolsm <anselmolsm!~anselmols@192.55.55.41> has joined #yocto | 22:17 | |
*** Gintaro <Gintaro!~gintaro@geertswei.nl> has quit IRC | 22:20 | |
Ulfalizer | BlitzBlizz: is /home/debian1/buildDir/poky-krogoth-15.0.0/meta-openembedded/meta-python/recipes-devtools/python/python-dbus_1.2.0.bb the file with the added exports? double-check. | 22:20 |
moto-timo | strange that it isn't showing meta-oe, meta-networking and meta-python in the build output... | 22:21 |
moto-timo | should be seeing: | 22:21 |
moto-timo | meta-oe | 22:21 |
moto-timo | meta-python | 22:21 |
moto-timo | meta-networking = "krogoth:247b1267bbe95719cd4877d2d3cfbaf2a2f4865a" | 22:21 |
moto-timo | y | 22:21 |
moto-timo | when's the last time you did a git pull on your repos? | 22:22 |
*** bottazzini <bottazzini!~realBigfo@192.55.54.42> has quit IRC | 22:23 | |
BlitzBlizz | it was there. i don't know where it went. i added the exports and now i think it's working | 22:24 |
BlitzBlizz | no error | 22:25 |
Ulfalizer | \o/ | 22:25 |
BlitzBlizz | great. thank you guys :-) | 22:26 |
Ulfalizer | np | 22:26 |
moto-timo | you're welcome | 22:26 |
BlitzBlizz | i got some noob question. do you have time to answer them? | 22:26 |
Ulfalizer | try and see. i might be going to bed at any moment though. :P | 22:26 |
moto-timo | we can try. Otherwise I can point you to the docs | 22:26 |
BlitzBlizz | ok | 22:27 |
BlitzBlizz | first of all, this line i added to the local.conf to build proftpd... is this the right way to add | 22:27 |
BlitzBlizz | ? | 22:27 |
moto-timo | yes | 22:27 |
BlitzBlizz | ok | 22:28 |
moto-timo | it is a right way. not the only way, but nothing wrong | 22:28 |
moto-timo | the space matters (" proftpd") | 22:28 |
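For reference, the corrected local.conf line as neverpanic and moto-timo describe it; the leading space inside the quotes matters because _append concatenates verbatim:

    # local.conf
    IMAGE_INSTALL_append = " proftpd"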
BlitzBlizz | after a successful build, if i e.g. also add vsftpd, how can i build a new image including everything i had before plus the new vsftpd without rebuilding all packages? one build takes me 4-5 hours | 22:29 |
moto-timo | see 5.2.1 in http://www.yoctoproject.org/docs/2.1/mega-manual/mega-manual.html | 22:29 |
moto-timo | as long as you don't change layers and a few other things, bitbake is smart enough to use what it can | 22:30 |
moto-timo | re-use | 22:30 |
moto-timo | the actual last steps of building the image will of course have to be done fresh | 22:31 |
BlitzBlizz | there was always an error if i built in the same directory, but i don't remember the error anymore | 22:31 |
*** paulg <paulg!~paulg@OTWAON23-3096772825.sdsl.bell.ca> has joined #yocto | 22:32 | |
Ulfalizer | BlitzBlizz: rebuilding without rebuilding everything is how most builds are done when you're working on stuff. it's well-supported. | 22:33 |
Ulfalizer | and yocto figures out by itself what needs to be redone | 22:33 |
BlitzBlizz | ok. i will try this after the current build. | 22:34 |
moto-timo | but if you add a new layer, or change branches in a layer, the metadata has changed and so a rebuild is going to happen | 22:35 |
moto-timo | if you are just changing what goes into an image in the existing build it shouldn't take as long | 22:35 |
BlitzBlizz | ok. i will not change that much. just want to transfer some data via ftp | 22:35 |
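A sketch of the incremental flow moto-timo outlines: extend the image contents in local.conf and re-run only the image build, letting sstate reuse everything that has not changed. The image name core-image-minimal is an assumption here; use whichever image recipe you were already building:

    # local.conf
    IMAGE_INSTALL_append = " proftpd vsftpd"
    # then rebuild just the image; unchanged packages come from sstate:
    #   bitbake core-image-minimal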
BlitzBlizz | another question. how can i change the keymap? | 22:36 |
*** igor2 <igor2!~igor@189.112.127.225> has quit IRC | 22:36 | |
*** benjamirc1 <benjamirc1!~besquive@134.134.139.74> has quit IRC | 22:37 | |
BlitzBlizz | from qwerty to qwertz | 22:37 |
moto-timo | 28.2 in the mega manual link above | 22:37 |
moto-timo | eh... not much detail | 22:38 |
*** istarilucky <istarilucky!~rlucca@177.159.144.73> has quit IRC | 22:38 | |
moto-timo | that's not something I've done so I'll leave it to somebody else to answer :) | 22:38 |
*** Biliogadafr <Biliogadafr!~pin@nat3-minsk-pool-46-53-182-183.telecom.by> has quit IRC | 22:39 | |
BlitzBlizz | who will answer this? :-) | 22:39 |
*** Gintaro <Gintaro!~gintaro@geertswei.nl> has joined #yocto | 22:40 | |
moto-timo | here's an example: https://github.com/shr-distribution/meta-smartphone/tree/master/meta-nokia/recipes-bsp/keymaps | 22:43 |
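A very rough sketch of one way to switch to a qwertz console keymap by appending to a keymaps recipe, loosely modelled on the meta-nokia example linked above. Everything here is an assumption: the bbappend target, the keymap-de.map file name and the install location all depend on the BSP and on the keymaps recipe in your branch, so treat it only as an illustration of the bbappend mechanism:

    # keymaps_%.bbappend, placed in your own layer
    FILESEXTRAPATHS_prepend := "${THISDIR}/files:"
    SRC_URI += "file://keymap-de.map"

    do_install_append() {
        # install the qwertz map alongside the recipe's other files
        install -m 0644 ${WORKDIR}/keymap-de.map ${D}${sysconfdir}/
    }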
BlitzBlizz | how can i exclude packages from being built? e.g. python-dbus. i don't need it | 22:43 |
*** Gintaro <Gintaro!~gintaro@geertswei.nl> has quit IRC | 22:43 | |
moto-timo | if it built, something needed it | 22:43 |
BlitzBlizz | ok | 22:43 |
moto-timo | bitbake does not build stuff it doesn't need | 22:43 |
BlitzBlizz | ok | 22:44 |
moto-timo | that's why the DEPENDS and RDEPENDS are there | 22:44 |
moto-timo | in the recipes | 22:45 |
moto-timo | (build time and run-time respectively) | 22:45 |
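An illustrative, hypothetical recipe fragment showing the two kinds of dependencies moto-timo mentions; the package names are examples only, not taken from any real recipe:

    DEPENDS = "dbus"                 # needed on the build host to compile this recipe
    RDEPENDS_${PN} = "python-core"   # needed on the target at run time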
moto-timo | there are a lot of good books on getting started with Yocto Project... | 22:45 |
BlitzBlizz | i spent the last 2 weeks with it. it's a really amazing project, but at the moment i just want to get a simple image which is capable of transferring data via ftp. it's for my bachelor thesis and it's not about building a yocto image :-) | 22:47 |
moto-timo | you could try a pre-built distribution then | 22:48 |
*** Ulfalizer <Ulfalizer!~Ulfalizer@ip5f5bffc3.dynamic.kabel-deutschland.de> has quit IRC | 22:49 | |
*** Gintaro <Gintaro!~gintaro@geertswei.nl> has joined #yocto | 22:49 | |
BlitzBlizz | the problem is, i need a working mptcp kernel and for yocto i found a good tutorial | 22:50 |
*** psadro <psadro!~Thunderbi@2620:0:ed0:800a:72f3:95ff:fe1d:9866> has quit IRC | 22:52 | |
*** psadro <psadro!~Thunderbi@2620:0:ed0:800a:72f3:95ff:fe1d:9866> has joined #yocto | 22:52 | |
*** Gintaro <Gintaro!~gintaro@geertswei.nl> has quit IRC | 22:53 | |
moto-timo | well, it sounds like you are on your way to a working solution. I'm headed home. g'night. | 22:54 |
BlitzBlizz | thanks again. :-) and good night | 22:55 |
moto-timo | you're welcome | 22:55 |
*** sameo <sameo!~samuel@192.55.55.39> has quit IRC | 22:58 | |
*** BlitzBlizz <BlitzBlizz!~blitz@p5796D9E5.dip0.t-ipconnect.de> has quit IRC | 23:00 | |
*** Gintaro <Gintaro!~gintaro@geertswei.nl> has joined #yocto | 23:08 | |
*** Gintaro <Gintaro!~gintaro@geertswei.nl> has quit IRC | 23:11 | |
*** aehs29 <aehs29!~aehernan@134.134.139.82> has left #yocto | 23:19 | |
*** Gintaro <Gintaro!~gintaro@geertswei.nl> has joined #yocto | 23:23 | |
*** Gintaro <Gintaro!~gintaro@geertswei.nl> has quit IRC | 23:26 | |
*** Crofton <Crofton!~balister@fw.whitepine.k12.nv.us> has quit IRC | 23:40 | |
*** crankslider <crankslider!~slidercra@unaffiliated/slidercrank> has quit IRC | 23:41 | |
*** Gintaro <Gintaro!~gintaro@geertswei.nl> has joined #yocto | 23:42 | |
*** Gintaro <Gintaro!~gintaro@geertswei.nl> has quit IRC | 23:45 | |
*** Gintaro <Gintaro!~gintaro@geertswei.nl> has joined #yocto | 23:46 | |
*** eraineri <eraineri!~eraineri@cpe-70-119-111-146.tx.res.rr.com> has quit IRC | 23:47 | |
*** Gintaro <Gintaro!~gintaro@geertswei.nl> has quit IRC | 23:50 | |
*** Gintaro <Gintaro!~gintaro@geertswei.nl> has joined #yocto | 23:51 | |
*** sgw_ <sgw_!sgw_@nat/intel/x-jsbqzkgaexxixxyj> has joined #yocto | 23:52 | |
*** Gintaro <Gintaro!~gintaro@geertswei.nl> has quit IRC | 23:54 | |
*** challinan <challinan!~chris@173-10-226-189-BusName-WestFlorida.hfc.comcastbusiness.net> has quit IRC | 23:56 |