I agree, I should have separated those questions out.<br><br><div class="gmail_quote">On Wed, Feb 15, 2012 at 8:40 PM, Mac Mollison <span dir="ltr"><<a href="mailto:mollison@cs.unc.edu">mollison@cs.unc.edu</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">I think there are really two separate questions here (please let us<br>
know, Jonathan, if you agree or not):<br>
<br>
(1) "Should we switch from sched-trace to the Linux kernel tracing<br>
infrastructure used by kerneltrace instead of maintaining both side by<br>
side?"<br>
<br>
That, I have no opinion on.<br>
<br>
FYI, unit-trace has little to no bearing on this decision, because it<br>
would be easy to write a new unit-trace frontend that can parse the<br>
same trace files as kernelshark. I wrote a new frontend to parse trace<br>
files from my userspace scheduler, and it didn't take long.<br>
<br>
(2) "Is it a good idea to be adding new visuazliation functionality to<br>
kernelshark instead of unit-trace? i.e. where do we want to spend our<br>
effort in terms of developing visualization tools?"<br>
<br>
I concur that it is worth trying to extend kernelshark. You're going<br>
to get much more bang for your buck that way, as opposed to working<br>
with the extremely obtuse unit-trace visualizer code.<br>
<br>
Just in case that ultimately proves to be problematic, you could always<br>
switch back to the unit-trace visualizer. By then there may be a new,<br>
maintainable, extensible unit-trace visualizer anyway, because I think<br>
I'll have to create something like that for my userspace scheduling<br>
work.<br>
<font color="#888888"><br>
- Mac<br>
</font><div><div></div><div class="h5"><br>
<br>
On Wed, 15 Feb 2012 18:36:15 -0500<br>
Jonathan Herman <<a href="mailto:hermanjl@cs.unc.edu">hermanjl@cs.unc.edu</a>> wrote:<br>
<br>
> This is really, really nice. I'll give it a couple of days for everyone<br>
> to check it out and then probably merge it into staging. It has inspired<br>
> another question: should we move sched_trace towards this infrastructure?<br>
><br>
> I need to add visualization for container scheduling into something<br>
> so that I can practically debug<br>
> my implementation. The unit-trace visualization code is a tad obtuse<br>
> and I was not looking forward to<br>
> adding container support. The code for kernelshark seems modularized<br>
> and slick. I would much rather<br>
> add code to this. I could add visualization for releases / deadlines /<br>
> blocking, etc. fairly easily.<br>
><br>
> Other / future work (Glenn's interrupts, Chris's memory management) on<br>
> Litmus would benefit from an easily extensible tracing framework. I don't<br>
> want to extend unit-trace if we'll have to abandon it for tracepoints<br>
> anyway.<br>
><br>
> Chris, Glenn, Mac, and I are in favor of abandoning unit-trace for kernel<br>
> visualization. Bjoern and Andrea, what do you think about this? Going<br>
> forward, I would see us dropping unit-trace for kernel visualization, but<br>
> could we replace sched_trace entirely in the long term? Would we want to?<br>
><br>
> For those that didn't get a chance to play with it, this also supports<br>
> dynamically enabling / disabling events<br>
> as well as a task-centric view of system events, so that you can list<br>
> rt-spin processes and see how they are<br>
> behaving.<br>
><br>
> On Tue, Feb 14, 2012 at 2:59 PM, Andrea Bastoni <<a href="mailto:bastoni@cs.unc.edu">bastoni@cs.unc.edu</a>><br>
> wrote:<br>
><br>
> > On 02/14/2012 12:05 AM, Glenn Elliott wrote:<br>
> > ><br>
> > > On Feb 11, 2012, at 4:17 PM, Andrea Bastoni wrote:<br>
> > ><br>
> > >> Hi all,<br>
> > >><br>
> > >> I've managed to expand and polish a bit a patch that I've had around<br>
> > >> for a while. It basically enables the same sched_trace_XXX() functions<br>
> > >> that we currently use to trace scheduling events, but it does so using<br>
> > >> kernel-style events (/sys/kernel/debug/tracing/ etc.).<br>
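> > >><br>
> > >> For anyone who hasn't used the kernel's event infrastructure: such an<br>
> > >> event is declared with the TRACE_EVENT() macro in a trace header. The<br>
> > >> sketch below is purely illustrative -- the event name and fields are<br>
> > >> made up rather than taken from the actual wip-tracepoints patch -- but<br>
> > >> it shows the general shape of a definition:<br>
> > >><br>
> > >> #undef TRACE_SYSTEM<br>
> > >> #define TRACE_SYSTEM litmus<br>
> > >><br>
> > >> #if !defined(_TRACE_LITMUS_H) || defined(TRACE_HEADER_MULTI_READ)<br>
> > >> #define _TRACE_LITMUS_H<br>
> > >><br>
> > >> #include <linux/tracepoint.h><br>
> > >><br>
> > >> TRACE_EVENT(litmus_task_release,        /* hypothetical event name */<br>
> > >>     TP_PROTO(pid_t pid, unsigned long long release,<br>
> > >>              unsigned long long deadline),<br>
> > >>     TP_ARGS(pid, release, deadline),<br>
> > >>     TP_STRUCT__entry(<br>
> > >>         __field(pid_t, pid)<br>
> > >>         __field(unsigned long long, release)<br>
> > >>         __field(unsigned long long, deadline)<br>
> > >>     ),<br>
> > >>     TP_fast_assign(<br>
> > >>         __entry->pid      = pid;<br>
> > >>         __entry->release  = release;<br>
> > >>         __entry->deadline = deadline;<br>
> > >>     ),<br>
> > >>     TP_printk("pid=%d release=%llu deadline=%llu",<br>
> > >>               __entry->pid, __entry->release, __entry->deadline)<br>
> > >> );<br>
> > >><br>
> > >> #endif /* _TRACE_LITMUS_H */<br>
> > >><br>
> > >> /* this part must be outside the include guard */<br>
> > >> #include <trace/define_trace.h><br>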
> > >><br>
> > >> So, why another tracing infrastructure:<br>
> > >> - Litmus tracepoints can be recorded and analyzed together<br>
> > >> (single time reference) with all other kernel tracing events<br>
> > >> (e.g., sched:sched_switch, etc.). It's easier to correlate the<br>
> > >> effects of kernel events on litmus tasks.<br>
> > >><br>
> > >> - It enables a quick way to visualize and process schedule traces<br>
> > >> using trace-cmd utility and kernelshark visualizer.<br>
> > >> Kernelshark lacks unit-trace's schedule-correctness checks, but<br>
> > >> it enables a fast view of schedule traces and it has several<br>
> > >> filtering options (for all kernel events, not only Litmus').<br>
> > >><br>
> > >> Attached (I hope the ML won't filter images ;)) you can find the<br>
> > >> visualization of a simple set of rtspin tasks. Particularly, getting<br>
> > >> the trace of a single task is straightforward using trace-cmd:<br>
> > >><br>
> > >> # trace-cmd record -e sched:sched_switch -e litmus:* ./rtspin -p 0 50 100 2<br>
> > >><br>
> > >> and to visualize it:<br>
> > >><br>
> > >> # kernelshark trace.dat<br>
> > >><br>
> > >> trace-cmd can be fetched here:<br>
> > >><br>
> > >> git://<a href="http://git.kernel.org/pub/scm/linux/kernel/git/rostedt/trace-cmd.git" target="_blank">git.kernel.org/pub/scm/linux/kernel/git/rostedt/trace-cmd.git</a><br>
> > >><br>
> > >> (kernelshark is just the "make gui" of trace-cmd; trace-cmd and<br>
> > >> kernelshark have a lot more features than simple filtering and<br>
> > >> visualization; hopefully they should be a good help for debugging.)<br>
> > >><br>
> > >> The patch is on "wip-tracepoints" on the main repository and on jupiter.<br>
> > >><br>
> > >> Info on trace-cmd, kernelshark, and ftrace are available here:<br>
> > >><br>
> > >> <a href="http://lwn.net/Articles/341902/" target="_blank">http://lwn.net/Articles/341902/</a><br>
> > >> <a href="http://lwn.net/Articles/425583/" target="_blank">http://lwn.net/Articles/425583/</a><br>
> > >> <a href="http://rostedt.homelinux.com/kernelshark/" target="_blank">http://rostedt.homelinux.com/kernelshark/</a><br>
> > >> <a href="http://lwn.net/Articles/365835/" target="_blank">http://lwn.net/Articles/365835/</a><br>
> > >> <a href="http://lwn.net/Articles/366796/" target="_blank">http://lwn.net/Articles/366796/</a><br>
> > ><br>
> > ><br>
> > > I saw these tracing tools at RTLWS this year and thought it would be<br>
> > > nice to leverage the OS tracing and visualization tools. The validation<br>
> > > methods of unit-trace are nice, but have fallen out of use. Unit-trace<br>
> > > is mostly used for visual inspection/validation, and I think kernelshark<br>
> > > is probably more robust than unit-trace, right?<br>
> ><br>
> > Umm, I think the major strength of this approach is that it's easier<br>
> > to correlate (also visually) Linux tasks and Litmus tasks. It also<br>
> > enables a quick way to visualize schedule traces, but ATM:<br>
> ><br>
> > - unit-trace schedule plots are prettier! :)<br>
> > When you visualize plots with kernelshark you also get (if you don't<br>
> > disable them) all the "spam" from other events/tracing points.<br>
> ><br>
> > - unit-trace can automatically check for deadline misses<br>
> ><br>
> > > Questions:<br>
> > > (1) I guess this would completely remove the feather-trace<br>
> > > underpinnings of sched_trace in favor of this?<br>
> ><br>
> > Nope, as I said in a previous email, it adds to sched_trace_XXX(). You<br>
> > can have both enabled, both disabled, or one enabled and the other<br>
> > disabled. The defines in [include/litmus/sched_trace.h] do the<br>
> > enable/disable trick.<br>
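> ><br>
> > Roughly speaking -- this is only an illustrative sketch with placeholder<br>
> > config options and macro names, not the exact contents of sched_trace.h --<br>
> > each backend is guarded by its own config option, so either, both, or<br>
> > neither can be compiled in:<br>
> ><br>
> > /* Sketch only: placeholder names, not the real sched_trace.h macros. */<br>
> > #ifdef CONFIG_SCHED_TASK_TRACE<br>
> > /* feather-trace backend enabled: forward to a feather-trace event */<br>
> > #define SCHED_TRACE(id, callback, task)  ft_event1(id, callback, task)<br>
> > #else<br>
> > /* backend disabled: the call compiles away to nothing */<br>
> > #define SCHED_TRACE(id, callback, task)<br>
> > #endif<br>
> ><br>
> > #ifdef CONFIG_SCHED_LITMUS_TRACEPOINT<br>
> > /* kernel-style tracepoint backend: emit the corresponding trace event */<br>
> > #define LITMUS_TRACEPOINT(event, task)   trace_litmus_##event(task)<br>
> > #else<br>
> > #define LITMUS_TRACEPOINT(event, task)<br>
> > #endif<br>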
> ><br>
> > > (2) How might this affect the analysis tools we use in sched_trace.git?<br>
> > > Can we merely update to new struct formats, or is it more complicated<br>
> > > than that?<br>
> ><br>
> > Umm, you're always more than welcome to update them if you want! :)<br>
> > I don't see problems in using both methods. It's always nice to have<br>
> > Litmus-only traces without all the spam that can be generated by kernel<br>
> > function tracers. (You can play with "./trace-cmd record -e all /bin/ls"<br>
> > to get an idea of how many events will be recorded... and you're just<br>
> > tracing events, not all the functions!)<br>
> ><br>
> > > (3) How big is the buffer used by the Linux tracing? Using<br>
> > > feather-trace-based tracing, I've seen dropped events in systems that<br>
> > > are temporarily overutilized. This is because ft-trace gets starved<br>
> > > for CPU time. I've made the sched_trace buffers huge to counter this,<br>
> > > but this "fix" doesn't always work. Would Linux tracing make dropped<br>
> > > events more or less likely? What recourse do we have if we find that<br>
> > > events are being dropped?<br>
> ><br>
> > [snip]<br>
> > > Info on trace-cmd, kernelshark, and ftrace are available here:<br>
> > ><br>
> > [snip]<br>
> > > <a href="http://lwn.net/Articles/366796/" target="_blank">http://lwn.net/Articles/366796/</a><br>
> ><br>
> > The trace buffer size can be tuned via buffer_size_kb; and perhaps<br>
> > starting/stopping the trace from within the kernel may also work.<br>
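> ><br>
> > Just as an illustration (not code from the patch), "stopping the trace<br>
> > from the kernel" could be done with the tracing_on()/tracing_off()<br>
> > helpers; the wrapper functions below are made up:<br>
> ><br>
> > #include <linux/kernel.h>    /* declares tracing_on() / tracing_off() */<br>
> ><br>
> > /* Hypothetical call sites placed around a window of interest. */<br>
> > static void start_capture_window(void)<br>
> > {<br>
> >         tracing_on();    /* let events flow into the ring buffer */<br>
> > }<br>
> ><br>
> > static void stop_capture_window(void)<br>
> > {<br>
> >         tracing_off();   /* freeze the buffer so recorded events are not overwritten */<br>
> > }<br>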
> ><br>
> > Thanks,<br>
> > - Andrea<br>
> ><br>
> ><br>
> > > -Glenn<br>
> > ><br>
> > ><br>
> ><br>
> ><br>
><br>
><br>
><br>
<br>
_______________________________________________<br>
litmus-dev mailing list<br>
<a href="mailto:litmus-dev@lists.litmus-rt.org">litmus-dev@lists.litmus-rt.org</a><br>
<a href="https://lists.litmus-rt.org/listinfo/litmus-dev" target="_blank">https://lists.litmus-rt.org/listinfo/litmus-dev</a><br>
</div></div></blockquote></div><br><br clear="all"><div><br></div>-- <br>Jonathan Herman<br>Department of Computer Science at UNC Chapel Hill<br>