Can Anyone Help With A Quick App_record.c Module Improvement And Can Explain Over-riding Modules?



I want to start recording with a prompt of “press or say 1 to 5”. If no DTMF is pressed, I want to send the recording to Google Speech to get the number back (I’ve got that part working already).

If any dtmf key is pressed while Application_Record is running with option y, then the recording terminates and sends RECORD_STATUS of “DTMF” (A terminating DTMF was received).

But I need to know **what** number that DTMF was, and I can’t see a way of grabbing it after the fact.

I can see in the code where the right variables are:
* \param dtmf_integer the integer value of the DTMF key received
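
For context, this is roughly how the existing ‘y’ option looks from the dialplan today (the filename, timings and labels are my placeholders) — RECORD_STATUS tells you *that* a DTMF digit ended the recording, but not *which* digit:

    exten => s,1,Record(/tmp/prompt-reply.wav,3,15,y)
     same => n,NoOp(RECORD_STATUS=${RECORD_STATUS})   ; "DTMF" if any key terminated the recording
     same => n,GotoIf($["${RECORD_STATUS}" = "DTMF"]?had-dtmf:no-dtmf)   ; labels defined elsewhere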

So, 3 questions I guess:

1: Am I going about this the right way? (unimrcp is not an option here)
2: Can someone explain in layman’s terms how a simpleton like me could copy, hack about with and make a new module, like, for example, app_record_alt.c, that would stick around each time I updated Asterisk from source?
3: Or, is anyone willing to make the simple code change to the file to improve it to send back the DTMF to the dialplan? For free to improve core code? If not, and I posted on the commercial list, how much would I be looking at to modify about 6 lines of code and return an extra variable?

So, ultimately, I’m hoping for something like:

option “y” returns a RECORD_STATUS of “DTMF” if a key was pressed

option “z” returns a RECORD_STATUS showing which key was pressed. Or possibly even a DTMF_VALUE (if an app can return two variables to the dialplan?)

I’m sure this would benefit a lot of people.

I posted this a few days ago in the forum, but no-one bit, so I’m hoping this list can help.

Many thanks!

7 thoughts on this question:

  • Just a quick and dirty thought, try the MONITOR application.


    Anchor-point:
      PLAYBACK (“press or say”)
      MONITOR (use the split audio-file mode, not the mixed – this way you can roughly separate which side did the “talking”)
      READ (audio file “1 to 5”, try to grab one digit)
      IF (the READ variable timed out)
        Send the incoming half of the monitor file to Google Speech
        PLAYBACK (some sound effect to indicate “thinking” on the Asterisk side – user feedback is good)
        Check the Google Speech result against a white-list
        IF (the filtered result was not a valid option)
          PLAYBACK “I didn’t understand that”
          GOTO Anchor-point
        Goto the next step using the valid decoded speech data
      ELSE
        Check the DTMF result against a white-list
        IF (the filtered DTMF result was not a valid option)
          PLAYBACK “I didn’t understand that”
          GOTO Anchor-point
        Goto the next step using the valid decoded DTMF data
      Catch-all, should never get here.


    Don’t forget to filter your user sourced data against your white-list, always assume users are hostile, this is part of the total picture of defence-in-depth.
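
    If it helps, here is a rough dialplan sketch of that flow (the sound-file names, the ${UNIQUEID}-based filename and the speech-recognition hand-off are my placeholders, not tested code):

    exten => s,1(anchor),Playback(press-or-say)        ; Anchor-point
     same => n,Monitor(wav,/tmp/rec-${UNIQUEID},b)     ; split mode: writes -in (caller) and -out files
     same => n,Read(digit,one-to-five,1,,1,5)          ; play "1 to 5", try to grab one digit
     same => n,StopMonitor()
     same => n,GotoIf($[${LEN(${digit})} > 0]?check-dtmf)
     ; --- speech branch: Read timed out ---
     ; send /tmp/rec-${UNIQUEID}-in.wav to Google Speech, play a "thinking"
     ; sound, white-list the decoded result, and Goto(anchor) if it is invalid
     same => n,Goto(done)
     ; --- DTMF branch ---
     same => n(check-dtmf),GotoIf($[${digit} >= 1 & ${digit} <= 5]?done:anchor)
     same => n(done),NoOp(valid choice)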


  • Oh, what a good idea! That’s exactly the kind of lateral thinking I
    was hoping someone would come up with.

    I thought it was called MixMonitor, and tried to wrap my head around it but couldn’t.

    I’ll give this a go tomorrow and let you know what I come up with!

    Many thanks,



MixMonitor is related, but different (as the name suggests, it automatically mixes the two channels), so I think Tim’s suggestion to use Monitor is much better.

    Note that you may well need to use the ‘b’ option with Monitor, to make sure you can record when there’s no bridge between two channels.

    Please do report back – this is a useful feature.


Also, be aware that when creating an audio file, you may need to insert a pause in your code before the file is:

    1) written
    2) flushed from cache to disk
    3) registered as available to be opened by the OS

    I have seen this take over 2 seconds before on a sluggish machine. You can speed this up a bit by putting the recordings in a RAMDISK
    partition on the host – but be careful that you only use short recordings and clean them up after they are not needed any more.

    If that’s still not fast enough, there’s the Google Speech streaming API, but I’m not up to snuff on that – essentially you’d need the functionality of the monitor split to only stream the remote user’s voice, then you’d need to pipe that to a Google Speech API tunnel. That’s probably not something you can hack away at with simple Asterisk dialplan applications.
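
    In dialplan terms, that pause can be as simple as a Wait() between stopping the monitor and handing the file off (the one-second value and the hand-off script here are placeholders):

     same => n,StopMonitor()
     same => n,Wait(1)   ; give the OS time to flush and close the recording file
     same => n,System(/usr/local/bin/speech-lookup ${tmp_record_file}-in.wav)   ; hypothetical script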



  • Thanks – my host uses SSD and everything seems pretty quick, but I’ll give it a 1 second pause.

    Funnily enough, I had just found an old reply from last year to another similar question:

    So I had a look and found this:

    And read this:

There are a few knowledge gaps, but I think with a few days of reading and the great help here, we might have a solution 🙂

    This is all very helpful – if anyone else feels like wading in, please do.

    Many thanks!

  • Hello.

    A little sub from my dialplan:

    exten => s,1,NoOp(Read)
     same => n,Set(LOCAL(tmp_record_file)=/tmp/asterisk-in/${EPOCH})
     same => n,Monitor(wav16,${tmp_record_file},o)
     same => n,Read(tmp_ext,${ARG2},${ARG3},${ARG4},${ARG5},${ARG6})
     same => n,StopMonitor()
     same => n,NoOp(ReadStatus=${READSTATUS})
     same => n,GotoIf($[${LEN(${tmp_ext})} > 0]?end)
     same => n,AGI(...)   ; (speech-recognition AGI call, truncated in the original; it sets agi_result and agi_call_exten)
     same => n,NoOp(Voice recognition result: "${agi_result}")
     same => n,GotoIf($["${agi_result}" != "found"]?end)
     same => n,Return(${agi_call_exten})
     same => n(end),Return(${tmp_ext})

On 21.01.2018 2:57, Jonathan H wrote:

  • Hi Dmitry and Tim (and everyone else with input into this thread)

Just wanted to thank you all; with your guidance, I’ve managed to bolt something very clean and efficient together using Dmitry and Tim’s templates, piped into the ding-dong npm package, which calls the Google Speech API node package.

    What I particularly like about DingDong is that it’s well documented insofar as it’s so simple, it barely needs documentation!