
# Converting math to code

July 3, 2014 2:46 PM

Wanting to code. Can't read calculus. Kinda stuck. Help?

As I spend more and more time coding (just as a hobby, mostly in C and C++), I keep running into a specific limitation of mine. I'm totally fine with the math, both conceptually and programmatically, but I can't understand much of the research that's in (what looks to me like) calculus. For example, the Wikipedia page that talks about signal filters talks about transfer functions, and points to the Wiki page on transfer functions. That page makes no sense to me.

But, when someone explains something like filters through non-calculus means (for example, this page uses English and flow charts), I can totally follow it and create code based on that explanation.

Some of the areas that I'm exploring aren't common enough to have English explanations of the calculus. So I'm stuck.

Is there a way to learn enough calculus to conceptually read it? I don't need to do any math in calculus. I just need to be able to understand what a function expects me to do with it. Basically, how to read a function so that I can implement the idea in code.

In case it isn't clear, I don't need code samples for filters. I have plenty, and that's not where I typically get stuck. I need resources on deciphering calculus functions.


Translation from math to algorithm to code is not a simple task - and unfortunately it involves a lot more than just understanding the math involved. A good book here might be "Numerical Recipes in C"; it has an approachable style and broad coverage of algorithms.

Generally you might look for texts in applied math or numerical methods. Both of these areas deal primarily with the "translation" of mathematics into algorithms which can be implemented on a computer.

posted by NoDef at 3:57 PM on July 3 [1 favorite]


This probably isn't the answer you want... But I went through the same problem as you, a little bit as a coder but more as an economist. The short answer is there are no short-cuts when it comes to math. It's not really possible to just learn the few parts you need now and then. Math builds on itself tremendously, and without a reasonable foundation to back up learning a new topic, you are usually screwed. The good news is that calc isn't that hard to become okay at if you're willing to dedicate some time. So go get a textbook and learn the basics of integration!

posted by jjmoney at 4:16 PM on July 3


The first time I read your question I thought you were just asking about notation - "I don't know what this funny-looking E symbol means (capital sigma), but when someone tells me to just add together all the values, I can do that."

But I'm thinking it's a bit deeper than that, in that if you don't have the mathematical concept down, it can be easy to make a mistake in implementation. The layman's intuition about a function from a verbal description won't always jibe with the result of doing the actual calculation, especially if you're doing things like signal processing. Signal processing is pretty deep in the math, since the only real way to get from time data to frequency data is a Fourier transform.

And NoDef is right that there's an extra step in going from an "analytical" result (x^2 + 2 e^(i*k*x)) to a result that can be implemented on a computer. One thing that might be useful, if the goal is to get past the sticky math bit and get the code implemented, would be a library of related functions. Fourier transforms and filters and so on. Then you can re-arrange them however you see fit without having to worry about how to get from the theory of a low-pass filter to the reality of time-domain data.

On the other hand if the goal is to get better on going through the sticky math bit and coming out with code on the other end, then you really are going to need to learn calculus. There's just not any consistent way that you can express the notation in some other form and reliably get the right answer regardless of what's inside the notation. The integral of (x^2) is x^3 / 3 but the integral of e^x is e^x - there's not anything I can tell you to do other than integrate that's going to give both of those results.

It's not that bad, especially if you have actual problems that you're trying to solve and can apply the math to that (rather than the typical 'math class' situation of needing to learn equations for their own sake). But I think those are your options - learn to do the calculus, or find someone else who has done the calculus for you.

posted by Lady Li at 4:31 PM on July 3 [1 favorite]



**saeculorum**: I understand filters. I understand the difference between frequency and time domains. I've written my own FFT classes. I'm happily doing forward and inverse transformations. I get it. (Not to say that I'm any expert, but I totally get the concept, and quite a few details.) This question wasn't about filters at all - I just used filters as an example, since they're common to many applications. But this is a good example. I'm great with the concepts, and when I understand the concept, I'm great at implementing it in code. It is, quite literally, the squiggly lines (i.e. the calculus equations) that I don't understand. Isn't f(x) some sort of function, or loop or something? I can write functions. I can write loops. I don't know what f(x) is telling me to do. That's what I'm asking about. Not filters, and not the concepts of signal transformations.

Maybe saying it another way, when reading a research paper, there's text which explains the concepts, which I can generally follow. They tend to do something like, "[concept], [concept], [concept], and this is expressed as [equation]." I can follow the concepts. I just can't read the equation.

When I say that I don't need to be able to "do the math," I think I'm just saying that I don't need to be able to independently derive the equations. (I might at some point, when I get more involved, but not now.) I just want to be able to read the equations.

**Lady Li**: *"there's an extra step in going from an "analytical" result (x^2 + 2 e^(i*k*x)) to a result that can be implemented on a computer."* If I can get to that "analytical result" of "(x^2 + 2 e^(i*k*x))", I'm golden. I can take that and run. If I understand conceptually that x represents some variable or factor, and i is some constant, and so on (which I almost always can, since that explanation is in the narrative of the paper), that's all I need. I'm trying to figure out how to get from the squiggly lines to "(x^2 + 2 e^(i*k*x))."

Given this, it may still be much harder than I think. I can accept that as an answer, in which case, I'll buckle down and start learning calc. But I wasn't sure if my question was really clear, given the responses.

Thanks!!!

posted by ericc at 6:08 PM on July 3

I bookmarked for myself the textbook Engineering Mathematics. It might be helpful. At this stage of my career I consider being able to implement math I don't understand a core competency, so, good on you. If you need the math to help you solve a problem, that can help a great deal.

posted by shothotbot at 6:46 PM on July 3


A simple way of thinking about integrals and derivatives in numerical terms is this:

A function f(x) is a lookup table indexed by x; values of x are ordered.

An integral is basically a summation (hence the big S) of values of f(x) along some interval, perhaps weighted by the distance between values of x; this is essentially the area under the curve.

A derivative is (f(x2)-f(x1))/(x2-x1), or the slope of a line connecting two adjacent values of f(x).

I learned calculus but thinking about things this way gets me pretty far for numerical implementations.

I think you can learn enough calculus to solve your problem pretty quickly. You don't need to learn how to find integrals and derivatives of functions analytically; just understand the basic concepts enough to get how to solve numerically, and how making your numerical solution infinitely precise converges on the analytical answer.

posted by vogon_poet at 7:23 PM on July 3


*I've written my own FFT classes*

A Fourier transform (FT) of a function f(t) is an integral over all time of the function multiplied by a complex exponential. A fast Fourier transform (FFT) is a particular method for computing the discrete Fourier transform (DFT), which is the summation over all samples of the function multiplied by a complex exponential. If you've done FFTs/DFTs, you've done discrete-time integration!

From a computational perspective (not a rigorous perspective), an integral is just a sum. The bottom number on the integral sign (the elongated S) is the "start" of the sum and the top number is the "end" of the sum. The differential (the d at the end of the integral) tells you what variable you are summing over.

For instance, if you are integrating f(t)dt from t = 0 to t = infinity, the computational version of that is:

```c
double integral = 0;
double dt = 0.001;

/* "infinity" must in practice be a large finite cutoff */
for (double t = 0; t < infinity; t += dt)
    integral += f(t) * dt;
```

As dt gets smaller, you get a better approximation of the integral of that function. In particular, as dt approaches 0, you get exactly the value of the integral, with the slight disadvantage that the sum is no longer computationally feasible numerically (and hence has to be done analytically).

posted by saeculorum at 10:10 PM on July 3

*Isn't f(x) some sort of function, or loop or something?*

Okay, this is just really basic notational stuff then. People probably over-recommend Khan Academy, but seriously, watch a few hours of the calculus videos and you'll easily understand the notation at the very least. Being able to solve problems, etc., takes a lot of practice, but the notation part is easy.

I don't think, however, that it'll help you understand Wikipedia articles very much, because they seem to be written at a graduate mathematics level for the most part, for reasons I'll never understand.

posted by empath at 1:14 AM on July 4

Basically, there are analytical solutions to problems, and there are applied, numerical solutions to problems. Both are active fields of research in mathematics. You need to look up the numerical solutions to the problems/equations you are looking at. There are known numerical, programmable algorithms to solve (or approximately solve) lots of standard equations that are useful in real life. Since you say you have no problem with code, perhaps you could trace back from the numerical solver to the analytical one (if it exists, it can be the basis for the algorithm). And that would help you figure out the correspondence between the calculus and the numerical solution. Sometimes, though, while the numerical solution converges to the analytical one, the analytical solution can be complicated or impossible to program.

An engineering math or undergraduate numerical analysis text could help you out. They explain exactly the translation from equation to solution to code. It's different for different classes of problems.

posted by bluefly at 9:08 AM on July 4 [1 favorite]


As others have said, it sounds like you just need to churn through a basic maths textbook until you understand the notation.

As an aside, given that we're talking about frequency domain analysis, it wouldn't be surprising if the *i* in the function f(x) = e^(ix) was the square root of -1.

posted by pharm at 9:05 AM on July 5


Some of the difficulty you're having stems from the difference between the frequency domain (the Laplace domain with the unit *s*, the Fourier domain with the unit *ω*, and the *z*-domain with the unit *z*) and the time domain. Signal processing analysis can generally be done in either domain, but for certain applications, one is naturally more suited. For filtering applications, the frequency domain is often used. However, all systems must be implemented in the time domain even if they are analyzed in a different domain. Hence, you can end up with articles like the first linked that implement everything in the time domain. For digital systems, this is relatively easy, because it works out that all (LTI) digital filters can be implemented with just multiplications, additions, and sample delays.

One of the characteristics of the frequency domain is that cascaded systems (filters) each have transfer functions in the frequency domain that can be *multiplied* together to determine what the cascaded system (combination of filters) does. In the time domain, cascaded systems (filters) have transfer functions in the time domain that must be *convolved* to determine what the cascaded system does. Convolution is inherently based in calculus, so there's not much that can be said about it from a rigorous perspective without calculus. However, from an intuitive perspective, convolution really tells what happens to functions when they are put "on top of each other" at various delays between the two functions. There are some nifty applications to demonstrate this. From the perspective of your question, it is possible to do a lot of analysis on filters without doing calculus, so long as you stay in the frequency domain (and stay with relatively simple LTI systems).

Note, this leads to the often-repeated statement that "convolution in the time domain is multiplication in the frequency domain, and vice versa." In addition, it's why it's possible to do filtering in both the time domain (which is most common) *and* the frequency domain (via windowing functions).

For digital systems, there is a direct correspondence between the frequency domain (in this case, the *z*-domain) and the time domain - in particular, z^(-1) means a one-sample delay. This is not the case for analog systems (which tend to use the Laplace *s*-domain or the Fourier *ω*-domain). (It works out that all frequency domains are somewhat equivalent to each other, but that's a different question.) However, again, this is why some pages you will see use calculus and some don't - it depends on what's being analyzed and how.

The most direct answer to your question is: in signal processing, integrals very rarely need to be implemented in order to make filters work. However, in order to determine what filters do, you may need to use calculus, depending on the type of analysis you are trying to do.

posted by saeculorum at 3:19 PM on July 3 [4 favorites]