Quick Status Update

It’s been a little while since I’ve posted any progress updates, so I figured I’d drop a quick note.

I’ve had other things competing for my free time over the last month, so I’m not as far along as I had hoped to be. There has, however, been some forward movement…

The CNC / Motion Control kernel module and DMA driver have been ported over and tested. I am now generating step/dir and laser control outputs from Glowforge motion (puls) files. Big shout-out to @palmercr, who helped me get past a pesky instrumentation issue that had me chasing my tail.

I’ve finished a quick and dirty driver for the Trinamic stepper drivers that allows me to set their parameters and read back status.

With these two major components completed, I’ll be starting motion testing today. Yes, it took 3 months for me to get to the point where I can actually make stuff move. :roll_eyes:

The DMA component is probably the most interesting part of the project so far (it’s how they are getting precise timing without using a real-time kernel or an external step generator). I’ll put together a technical description of these components when I get some more time.

Until then, all the code changes are up on the new GitHub repo.

6 Likes

Until you hit the 2-year delay mark, you have nothing to apologise for.

3 Likes

It is precisely on the 10kHz points with no jitter, but since it is sampling the original waveform at only 10kHz, the stepper pulses have massive jitter and a limited maximum frequency. A PC running LinuxCNC performs better, and that has pretty high interrupt latency. It’s a crazy way to drive stepper motors IMHO.

The inefficiencies of the puls file aside, the hardware model seems to be fairly efficient.

It utilizes the on-chip Enhanced Periodic Interrupt Timer (EPIT), which is unused by the Linux kernel, to control the timing of the pulses. Don’t let the fact that it has “Interrupt” in its name mislead you. In this case, the EPIT is not sending interrupts to the processor.

Instead, the EPIT is sending ‘events’ to the Smart Direct Memory Access (SDMA) engine. The SDMA is an on-chip processor that is usually used to offload data transfers between memory and peripherals, or directly between peripherals.

The STEP/DIR/POWER script running on the SDMA waits for an event signal from the EPIT (in this case, coming in at the rate of 10 kHz), then grabs a byte of the stepper waveform file from the in-memory FIFO. It sets/clears the outputs accordingly (using delay loops to ensure proper signal timing for the stepper drivers) and then returns to idle to wait for the next EPIT event.
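
To make that loop concrete, here is a minimal user-space C sketch of what happens per EPIT event. The bit positions, the byte encoding, and the stub functions are all hypothetical placeholders (this is not the actual puls format or the SDMA script itself); it only illustrates the wait-for-event, grab-a-byte, drive-the-outputs cycle.

```c
#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical bit layout for one waveform sample -- placeholders only,
 * not the actual Glowforge puls encoding. */
#define BIT_STEP_X   (1u << 0)
#define BIT_DIR_X    (1u << 1)
#define BIT_STEP_Y   (1u << 2)
#define BIT_DIR_Y    (1u << 3)
#define BIT_LASER_ON (1u << 4)

/* Stand-in for the SDMA driving an output pin; on real hardware this is a
 * register write inside the SDMA script. */
static void set_output(const char *name, bool level)
{
    printf("  %s=%d\n", name, level);
}

/* One EPIT event's worth of work: take one sample byte and drive the pins.
 * The delay comments mark where the real script burns cycles to satisfy
 * the stepper drivers' setup/hold and pulse-width requirements. */
static void process_sample(uint8_t b)
{
    set_output("DIR_X", b & BIT_DIR_X);
    set_output("DIR_Y", b & BIT_DIR_Y);
    set_output("LASER", b & BIT_LASER_ON);
    /* delay: direction-to-step setup time */

    set_output("STEP_X", b & BIT_STEP_X);
    set_output("STEP_Y", b & BIT_STEP_Y);
    /* delay: minimum step pulse width */
    set_output("STEP_X", false);
    set_output("STEP_Y", false);
}

int main(void)
{
    /* A tiny fake FIFO: three 100 us samples. */
    const uint8_t fifo[] = { BIT_STEP_X | BIT_DIR_X, 0, BIT_STEP_X | BIT_STEP_Y };

    for (size_t i = 0; i < sizeof fifo; i++) {
        printf("EPIT event %zu:\n", i);  /* one event every 100 us at 10 kHz */
        process_sample(fifo[i]);
    }
    /* When the real FIFO runs dry, the script raises its completion interrupt. */
    return 0;
}
```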

It does all of this without any application processor intervention/overhead.

Only when it reaches the end of the data in the FIFO (or finishes the number of bytes it was told to run) does it trigger an interrupt to notify the processor that it is done.
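
For comparison, the stock Linux dmaengine API exposes the same pattern: set up the whole transfer up front, let the engine run, and take a single completion interrupt at the end. The sketch below is generic dmaengine usage, not the Glowforge driver (which loads its own SDMA script); the "pulse" channel name is an assumption for illustration.

```c
#include <linux/device.h>
#include <linux/dmaengine.h>
#include <linux/err.h>

/* Completion callback: the only point where the application processor
 * gets involved. */
static void pulse_done(void *arg)
{
    pr_info("pulse stream finished\n");
}

/* Queue one pre-built pulse buffer and return; the CPU stays out of the
 * transfer until pulse_done() fires. */
static int start_pulse_dma(struct device *dev, dma_addr_t buf, size_t len)
{
    struct dma_chan *chan;
    struct dma_async_tx_descriptor *desc;

    chan = dma_request_chan(dev, "pulse");   /* hypothetical channel name */
    if (IS_ERR(chan))
        return PTR_ERR(chan);

    desc = dmaengine_prep_slave_single(chan, buf, len,
                                       DMA_MEM_TO_DEV, DMA_PREP_INTERRUPT);
    if (!desc) {
        dma_release_channel(chan);
        return -EINVAL;
    }

    desc->callback = pulse_done;
    dmaengine_submit(desc);
    dma_async_issue_pending(chan);
    return 0;
}
```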

It can run a lot faster than 10 kHz (up to 2 MHz, if you believe the limits hard-coded into the driver by Glowforge). I haven’t tested the higher speeds yet. The SDMA processor clock runs at 132 MHz, so it should certainly handle fairly high speeds - even at 256 microsteps.
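
As a rough sanity check on those limits, here’s a back-of-envelope calculation. The mechanics numbers (200 full steps per rev, 40 mm of travel per rev, a 500 mm/s move) are illustrative assumptions, not measured Glowforge values.

```c
#include <stdio.h>

int main(void)
{
    /* Assumed mechanics -- illustrative numbers only. */
    const double full_steps_per_rev = 200.0;
    const double microstepping      = 256.0;
    const double mm_per_rev         = 40.0;    /* belt travel per motor rev */
    const double speed_mm_s         = 500.0;   /* hypothetical fast move    */

    double usteps_per_mm = full_steps_per_rev * microstepping / mm_per_rev;
    double step_rate_hz  = usteps_per_mm * speed_mm_s;

    /* Prints: 1280 microsteps/mm -> 640 kHz at 500 mm/s (well under 2 MHz). */
    printf("%.0f microsteps/mm -> %.0f kHz at %.0f mm/s\n",
           usteps_per_mm, step_rate_hz / 1000.0, speed_mm_s);
    return 0;
}
```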

My thinking is that I’ll continuously feed the FIFO directly from a G-code processor.
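
Roughly, that means resampling the planner’s ideal step times onto the 10 kHz grid before pushing bytes into the FIFO. Here’s a minimal sketch of that quantization, using a made-up single-axis encoding (bit 0 = X step); the real puls format and feed path would differ.

```c
#include <stdio.h>
#include <stdint.h>
#include <stddef.h>

#define SAMPLE_PERIOD_US 100.0        /* 10 kHz EPIT rate */
#define BIT_STEP_X       (1u << 0)    /* assumed encoding: bit 0 = X step */

/* Resample ideal (monotonically increasing) step times onto the 100 us grid,
 * one output byte per slot, setting the step bit in the nearest slot. */
static size_t quantize_axis(const double *step_times_us, size_t nsteps,
                            uint8_t *buf, size_t buflen)
{
    size_t filled = 0;

    for (size_t i = 0; i < nsteps; i++) {
        size_t slot = (size_t)(step_times_us[i] / SAMPLE_PERIOD_US + 0.5);
        if (slot >= buflen)
            break;
        while (filled <= slot)
            buf[filled++] = 0;        /* idle samples between steps */
        buf[slot] |= BIT_STEP_X;      /* lands within +/-50 us of ideal */
    }
    return filled;
}

int main(void)
{
    /* Ideal steps every 180 us (~5.6 kHz) -- an awkward fit for a 100 us grid. */
    const double ideal_us[] = { 180.0, 360.0, 540.0, 720.0 };
    uint8_t buf[16];
    size_t n = quantize_axis(ideal_us, 4, buf, sizeof buf);

    for (size_t i = 0; i < n; i++)
        if (buf[i] & BIT_STEP_X)
            printf("step emitted at %.0f us\n", i * SAMPLE_PERIOD_US);
    return 0;
}
```

With those numbers the emitted steps land at 200, 400, 500, and 700 us - intervals of 200/100/200 us instead of an even 180 us, which is exactly the quantization jitter discussed below.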

There may still be some jitter, though. The weak link is memory access, which goes over a shared bus. If there is bus contention, it may take extra clock cycles for the SDMA to retrieve the next byte. There is a small burst-mode buffer that may mitigate this. I haven’t gotten far enough in testing to know whether it will be a problem.

1 Like

Well, I am only one guy working a couple of hours after dinner a few days a week. :smiley:

3 Likes

You also aren’t sitting on a pile of our cash. :grin:
It’s great that you enjoy this work so much; you’ve covered serious ground. Thanks for that!

2 Likes

The jitter comes from sampling the ideal stepper waveforms at a fixed frequency, quantising the gaps between steps to multiples of 100us. Although it has to be said that with a standard Bresenham motion system the slower axes all have jitter because they are effectively sampled at the frequency of the fastest axis.
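
To show what that looks like in practice, here’s a simplified two-axis Bresenham loop (a generic sketch, not code from any particular firmware). The fast axis steps on every tick, evenly spaced; the slow axis only steps when its error term rolls over, so its step spacing is quantised to whole ticks of the fast axis:

```c
#include <stdio.h>
#include <stdlib.h>

/* Simplified two-axis Bresenham stepper loop: one iteration per tick of the
 * fast (major) axis.  Prints which ticks the slow axis steps on, so the
 * uneven spacing (jitter) is visible. */
static void bresenham_move(long steps_x, long steps_y)
{
    long major = labs(steps_x) > labs(steps_y) ? labs(steps_x) : labs(steps_y);
    long minor = labs(steps_x) > labs(steps_y) ? labs(steps_y) : labs(steps_x);
    long err = major / 2;

    for (long tick = 0; tick < major; tick++) {
        /* step_major();  -- fast axis steps every tick, evenly spaced */
        err -= minor;
        if (err < 0) {
            err += major;
            /* step_minor(); -- slow axis steps here, spacing quantised
             * to whole ticks of the fast axis */
            printf("slow axis steps on tick %ld\n", tick);
        }
    }
}

int main(void)
{
    bresenham_move(10, 3);   /* e.g. X is the fast axis, Y the slow one */
    return 0;
}
```

For a 10:3 move the slow axis steps on ticks 1, 5 and 8 - intervals of 4 and 3 ticks rather than an even 3.33.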

1 Like

When I talk about jitter, I’m speaking from the hardware perspective: specifically, the variations in pulse timing caused by latencies in the process that generates them.

From that perspective, the jitter appears to be very minimal, if even detectable.

From the sampling rate perspective, is there an easy way to calculate what effect the sample rate has on jitter?

Crap. Now I have more research to do…

The jitter due to sampling is +/- half the sample period except when the step rate is an exact divisor of the sample rate.
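
To put numbers on that (the step intervals are arbitrary examples): a 237 us interval sampled on a 100 us grid lands each edge within ±50 us of its ideal time, while a 500 us interval (a step rate that divides 10 kHz exactly) comes out jitter-free.

```c
#include <stdio.h>
#include <math.h>

int main(void)
{
    const double sample_us = 100.0;              /* 10 kHz sampling */
    const double periods[] = { 237.0, 500.0 };   /* arbitrary example intervals */

    for (int p = 0; p < 2; p++) {
        printf("step interval %.0f us:\n", periods[p]);
        for (int i = 1; i <= 4; i++) {
            double ideal  = i * periods[p];
            double actual = round(ideal / sample_us) * sample_us;
            printf("  step %d: ideal %.0f, emitted %.0f, error %+.0f us\n",
                   i, ideal, actual, actual - ideal);
        }
    }
    return 0;
}
```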

On my 3D printers the timer interrupts are at the rate of the fastest axis so the jitter is only the interrupt latency for that axis. The slower axes either step or not on each interrupt using the Bresenham algorithm so they can have significant jitter.

A better way is to have separate timers for each axis.

Are you continuously varying the timer based on the current speed of the axis?

There is a second EPIT available with its own dedicated event input to the SDMA. They’re fed with a 66 MHz clock, so they have roughly 15 ns resolution. Though, since the SDMA scripts can only run one at a time using cooperative multi-tasking, I think using separate timers and scripts in this instance would cause more problems than it solves.

Yes. Discourse insists I say more!

1 Like

I fixed it. The minimum post is now 2 characters.

2 Likes