
  • My colleagues and I did something pretty exciting.

  • Using only the browser, a little bit of JavaScript, and something called the Web Audio API, we were able to decipher brainwaves.

  • Now, that sounds a little bit sensational, but it's mostly true.

  • So today I'd like to tell you the story of our journey through the lows and highs of Web Audio, in our quest to decipher brainwaves in the browser.

  • So this is a story of discovery; it's about finding out what the web can offer.

  • It's a story of exploration, about pushing the boundaries of web technologies to benefit others, to make a difference.

  • And last, well, it's a story that has a really cool demo at the end.

  • So, uh, stay tuned.

  • But before I can tell you that story, I have to tell you this one.

  • So my name's Braden Moore.

  • I'm a full stack developer at an Australian medical technology startup called Seer. Seer provides Australia's largest epilepsy management and diagnostic service.

  • In fact, the tagline of our corporate website is "epilepsy diagnosis made easy". Now, I didn't know anything about epilepsy before I started working at Seer, and I guess that's true for a lot of people.

  • So let me give you some of the facts.

  • So epilepsy is a chronic condition that affects over six million people worldwide.

  • Now, the symptoms of epilepsy vary from person to person, but can range from loss of awareness and loss of bodily control to full-body seizures.

  • Now, these events can vary in frequency, sometimes happening multiple times a day, other times once a month, or, in rarer cases, once a year.

  • But the results can be devastating regardless, and there's no known cure for all types of epilepsy.

  • But with a diagnosis, some of the symptoms can be treated.

  • The way that epilepsy is diagnosed is by reviewing a patient's brainwaves in what's called an EEG, or electroencephalogram.

  • So each of these traces represents the electrical signals from certain parts of the patient's brain, captured using electrodes on the head.

  • So our clinical team uses EEGs just like these to help provide a diagnosis.

  • The service that we offer at Seer looks like this: a patient comes to our clinic to get connected with these electrodes, to monitor their brainwaves over the next seven days.

  • Then we send them home.

  • They can live their life in the comfort of their own home while being monitored.

  • Then, after that time, they come back to our clinic to get disconnected.

  • And then we spend a few weeks looking through that data and looking for something that might give us a diagnosis.

  • So we've got seven days' worth of data.

  • What is it that the scientists are looking for?

  • Well, look, sometimes they're looking for the obvious.

  • They're looking for something like these minute-long seizures that are pretty easy to capture on tape.

  • But if these are very rare events, happening once a month or even once a year, there's no guarantee that we'll capture one during the seven days of monitoring. So more often they're looking for something a bit smaller.

  • They're looking for something like a needle in the haystack.

  • Well, in epilepsy there's a characteristic signal, which is called a spike-wave.

  • So these are little spikes and little waves that are around half a second long each. And so, while a patient might only have a seizure once a month, they might have dozens of these little spike-waves every day.

  • And so it's these needles in the haystack that our scientists are looking for. In fact, you're looking at a whole run of little spike-waves right there.

  • Okay, that's great.

  • But what does a web developer like me have to do with it?

  • I mentioned before that our clinical team gets this data and then they start reviewing it.

  • But that's not the full story.

  • Like every second company these days, we have our own cloud platform, and it's a platform for secure management, storage, and review of medical data.

  • So one of the powerful things about this platform is that it allows a clinical team to upload this EEG brainwave data and then review it anywhere, at any time, all through a browser.

  • One of the major technical challenges of building this platform is the ability to view all that data on the platform.

  • So to tackle this problem, we've built something that's very creatively called our viewer. It views the data, hence the name. And it looks something like this.

  • In fact, it looks exactly like this; this is a screenshot I took just a few minutes ago.

  • And so it's this data viewer that our clinical team uses to review these seven days' worth of data, to label events, and to eventually deliver a diagnosis.

  • One of the big challenges that we faced with this platform is getting neurophysiologists, as they're called, to move from traditional desktop-based software to a cloud platform.

  • Now, neurophysiologists often flick through EEG data very quickly.

  • They're going four pages a second.

  • So flick, flick, flick: they're looking very quickly through this, because if you want to get through lots of data, you don't want to be spending a lot of time on each page.

  • The way they're able to accomplish that is because they're trained to look for patterns.

  • They're looking for certain things, like the spike-waves.

  • Remember, that's a half-second event in a ten-second page that you're looking at, at four pages per second.

  • So you're looking for a needle in a haystack.

  • What happens when a needle doesn't look like a needle?

  • What happens when the EEG data you're trained to look at doesn't look quite right?

  • Well, cases like those could lead to misdiagnosis and awful patient outcomes.

  • So this is our problem.

  • How can we take EEG review from the desktop to the browser?

  • So there's one particular aspect of displaying this EEG that's very important, but also a little bit tricky, and that's called signal filtering.

  • So what's this problem of signal filtering that we're going to explore today?

  • Let's illustrate the problem. On the left is EEG data that has signal filters applied.

  • That's great.

  • On the right is that same data with the filters turned off.

  • So clearly there's a very big difference between these two.

  • So why is this?

  • You know, the brain was still there, right?

  • It's still, you know, recording the brain here.

  • Well, like most electrical signals, when you're recording something, you're not just recording that brain activity.

  • You're also recording a bunch of extra noise and artifacts. Some of this noise can come from high-frequency muscle contractions, it could come from DC drift, or it can come from mains electricity noise.

  • So all of these things aren't brainwaves, but they still show up in the EEG.

  • So what is a filter, after all?

  • Well, no, it's not that one.

  • Let me explain what a filter is by talking a bit about music.

  • We all like music.

  • So this is an audio spectrogram.

  • It's a visual representation of all the frequencies of a piece of audio, like your favorite song, at a given point in time.

  • So it has low frequencies like the bass and drums, mid frequencies like a spoken voice, and maybe higher frequencies like wind chimes or pan flutes or other super common instruments that you hear.

  • So, just like audio, your brainwave is also composed of a range of different frequencies, all happening at the same time.

  • So think about music.

  • Imagine you're producing a song, but you've got this kind of bass noise you don't really want in there, or there's too much pan flute.

  • Well, one way to get rid of that is by filtering out the lower frequencies or filtering out the higher frequencies.

  • And so for EEG there are three main filters we'd like to apply.

  • You can have a high-pass filter, which lets everything higher than the filter frequency pass through.

  • There's a notch filter, which notches or cuts out a narrow range of frequencies, and a low-pass filter, which is the opposite of your high-pass filter: everything lower than the filter frequency passes through.

  • You can also combine them.

  • You can have multiple of these filters running at once, and that's exactly what we'd like to do.
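
The idea of chaining filters can be sketched in plain JavaScript with toy one-pole designs. Real EEG filtering uses sharper, higher-order filters, and everything here (the function names, the 0.1 and 0.5 smoothing factors) is illustrative, but running several filters in series is the same shape as connecting Web Audio nodes one after another:

```javascript
// Toy one-pole filters, chained in series (illustrative only; real EEG
// filtering uses higher-order designs such as Butterworth filters).
function makeLowPass(alpha) {
  let prev = 0;
  return (x) => (prev = prev + alpha * (x - prev)); // smooths out fast changes
}
function makeHighPass(alpha) {
  const lp = makeLowPass(alpha);
  return (x) => x - lp(x); // high-pass = the signal minus its low-pass part
}

// Chain them, just like connecting Web Audio nodes in series.
const chain = [makeHighPass(0.1), makeLowPass(0.5)];
const filter = (x) => chain.reduce((y, f) => f(y), x);

// Feed in a constant signal (pure DC offset, like electrode drift):
const out = Array(200).fill(1).map(filter);
// The high-pass stage removes the DC offset, so the output decays toward 0.
```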

  • So, okay, how can we go about implementing filters in JavaScript, in the browser, in the cloud?

  • Well, look, first: could we not have a filter at all?

  • You know, the best code is the code you don't have to write.

  • But remember a couple of slides back, with filtering and without filtering?

  • That's kind of night and day right there.

  • So that's a bit of a no go.

  • I mean, I would've been happy with it. But, so, could we write our own filter implementation?

  • Well, it's kind of a maybe, because of stuff like this, and this, and all this stuff.

  • So I trained as a particle physicist, not an electrical engineer, so I don't know too much about filtering, but I've been told that there are a few subtleties involved in implementing a filter.

  • And I'm sure you'd agree that JavaScript probably isn't the best language to handle, you know, complex mathematical processes.

  • So we'll put that on the back burner.

  • Maybe.

  • Could we use a package?

  • You know, we're JavaScript developers, after all; that's what we always try to do.

  • So how about this one?

  • This is promising: DSP Kit.

  • It's a digital signal processing library.

  • But it seems like, while it can process signals, it's not intended to be used in production; it's just good to play with.

  • How about this one: DSP.js? Now, actually, this one is, I think, the most-starred signal processing library in JavaScript.

  • So you'd think that's probably the way to go, but it hasn't had a meaningful update in nine years.

  • I mean, I wasn't even coding nine years ago, and think of everything that's happened on the web since then. So maybe that's not the way either.

  • So that's three options exhausted.

  • What are we left with?

  • Well, if you read the title of this talk, you probably know what's coming next.

  • So I won't leave you in suspense.

  • So, introducing (that was the very quiet drum roll that you just heard then) the Web Audio API.

  • So this is a web API that's designed for processing audio.

  • It's standardized.

  • It's well maintained, has decent browser support, and has an active group of contributors and users.

  • So if we're looking for something that works now, Web Audio is probably the way to go.

  • Well, when I was first learning to use Web Audio, the Mozilla Developer Network, or MDN, docs were invaluable. And I actually just learned at this conference that many of these docs are written by this conference's very own Ruth.

  • So a big shout-out to Ruth for helping me learn this.

  • But let me take you through some of the basic concepts here.

  • So how do you use the Web Audio API? How do you actually use it to process this data?

  • Well, first you create an audio context.

  • This is kind of the canvas where everything happens. Inside this audio context, you create a bunch of nodes, one of which is your input node.

  • So this might be an oscillator, or this might be, like, your microphone feeding in there.

  • Next, you can make a bunch of effects nodes. So you can do things like reverb, or compression, panning, and many more.

  • All things that you know greatly with audio.

  • Maybe don't put reverb on the e e g Just just may you could also.

  • And then you've got to specify the destination.

  • So where is this getting output to?

  • Conventionally, that's gonna be your speakers.

  • All right. So after you do that, you string all those things together, one to the next.

  • Then you process the entire thing, and there you go: you've made your processed audio.

  • Okay, that's good.

  • But I might have jumped the gun a little bit here.

  • What does an audio API have to do with processing EEG data?

  • Like, why did I jump from, you know, JavaScript packages to this?

  • Well, it all comes down to one of the effects nodes. So the effects nodes can do really cool things to your data.

  • Such as reverb, compression, panning.

  • It can also do filtering.

  • So remember that audio spectrogram where those filters were applied?

  • Turns out, filtering is a thing that musicians and audio engineers actually do, so it makes sense that there'd be really well implemented filters in this API.

  • So if we could somehow leverage these filters and use them for EEG filtering,

  • Then we'd be set.

  • Okay, so in particular, there's one method that's really useful for us, called the createIIRFilter method.

  • Quick electrical engineering lesson: IIR stands for infinite impulse response.

  • That's all you need to know about that, unless you already know everything else about it.

  • The createIIRFilter method creates an IIR filter node, and it's super configurable, so you can make it high-pass, low-pass, notch, whatever you want in there.

  • So here's the plan.

  • First, we need to make some way of reading, or getting Web Audio to read, our data.

  • Once it's there, we can process the data.

  • We can apply the filters, and finally we output this back as the brainwaves displayed on the screen.

  • Cool, that's all well and good.

  • But how exactly do you read EEG with an audio API?

  • I'm glad you asked that question, because we're about to find out.

  • So here are the six steps that you need to decipher your own brainwaves in the browser with the Web Audio API. Let's go.

  • All right.

  • Entering VS Code mode now.

  • So first, number one: we need to set up an audio context.

  • This is the canvas where everything happens.

  • Next, you need to, you know, create an audio context. That's as simple as calling the constructor.

  • You pass in the number of channels, the number of samples per channel, and your sample rate, and there you go.
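
To make those constructor arguments concrete, here's a sketch of how the numbers relate. The values are illustrative guesses, not Seer's actual recording parameters, and in the browser you would then call `new OfflineAudioContext(numberOfChannels, length, sampleRate)`:

```javascript
// Sizing an OfflineAudioContext for EEG data (illustrative numbers only).
// The constructor takes (numberOfChannels, lengthInSamples, sampleRate).
const numberOfChannels = 8;              // e.g. 8 EEG electrode channels
const sampleRate = 250;                  // EEG sample rate in Hz (audio would be 44100)
const pageSeconds = 10;                  // one on-screen "page" of EEG
const length = pageSeconds * sampleRate; // samples per channel to process
// In the browser:
// const ctx = new OfflineAudioContext(numberOfChannels, length, sampleRate);
```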

  • Now, this is an OfflineAudioContext. There are kind of two audio contexts you can use in the Web Audio API: the OfflineAudioContext, or something called the AudioContext. Makes sense.

  • So the thing about the AudioContext is that this is the one you'd usually use if you're processing audio.

  • If you're making, like, a multitrack mixer, or you want to apply some reverb on the web, that's great.

  • The problem with it, though, is that it plays back the audio more or less in real time.

  • So it's good if you're actually, you know, applying your effects in real time on music.

  • But if you've got seven days' worth of EEG data to look through,

  • You really don't want to be waiting seven days for that to process.

  • So that's maybe not the way to go.

  • There's another type of audio context you can use, called the OfflineAudioContext. It's quite similar to the regular AudioContext; quoting directly from MDN:

  • "In contrast with a standard AudioContext, an OfflineAudioContext doesn't render the audio to the device hardware."

  • "Instead, it generates it, as fast as it can, and outputs the result to an AudioBuffer."

  • Now that sounds brilliant.

  • So basically: the AudioContext is slow, the OfflineAudioContext is super fast at what we're trying to do.

  • So that's why we're going with it.

  • We've made this audio context.

  • Next, we have to set up our source node. Pretty straightforward: create a buffer source, and note that these methods all come off the audio context, so you can't actually create nodes in Web Audio without being attached to a context.

  • Next up, we need to create and fill this audio buffer.

  • So first, we're going to create a buffer with the number of channels, samples, and sample rate.

  • Then we're going to loop over our channels, and for each channel, I'm going to get the channel buffer that I just created.

  • Remember, this is an empty audio buffer.

  • So get the channel buffer for that channel.

  • I'm going to iterate over all the samples, and then right here, with that channel buffer, I'm going to fill it with our EEG data, which is in the very creatively named data array.

  • So our EEG data is just an array of arrays of floats. And something really exciting is happening here.

  • So right there on the left, that's an audio buffer. That's audio; that's stuff you can process through the Web Audio API.

  • On the right is your EEG data, and in the middle is an equals sign.

  • So what you're doing is you're moving your EEG data into an audio buffer.

  • Which means right now you can apply filters, and reverb, and compression, all this cool stuff you really shouldn't do to EEG.

  • But you can, just like that.

  • That's really cool.

  • Next, you just assign the audio buffer to your source, and there you go: you've got your input.

  • You've fed your EEG data into the Web Audio API.
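
The buffer-filling step can be sketched like this. Since `createBuffer` and `getChannelData` only exist in the browser, a plain `Float32Array` stands in for the channel buffer here, and the variable names (`dataArray`, `sampleRate`) are illustrative, not from the real codebase:

```javascript
// A minimal sketch of the "EEG data = audio buffer" idea. In the browser you
// would call offlineCtx.createBuffer(channels, length, rate) and then
// buffer.getChannelData(ch); here a Float32Array stands in for that buffer.
const sampleRate = 250;   // EEG rates are far lower than audio's 44100 Hz
const dataArray = [       // one array of float samples per channel
  [0.1, -0.2, 0.3, -0.4],
  [0.05, 0.0, -0.05, 0.1],
];

const channelBuffers = dataArray.map((channel) => {
  const buf = new Float32Array(channel.length); // the "empty audio buffer"
  buf.set(channel);                             // fill it with EEG samples
  return buf;
});
// channelBuffers[0] is now "audio" as far as Web Audio is concerned.
```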

  • Okay, next up, we need to generate the filters.

  • Well, a bit of background on this.

  • So the createIIRFilter method, remember, is super configurable, but it's super configurable in that you have to provide certain things called coefficients to define what the filter looks like.

  • Now, there are many different filter designs one could use, all slightly different. From our research, we found that desktop software usually uses something called a Butterworth filter, and that's, you know, pretty standard across a lot of software.

  • Surprisingly, you know, JavaScript doesn't have its own Butterworth filter coefficient generator.

  • So we kind of had to write our own, which looks something like this, and this, and this, and it just keeps going.

  • Are there any more pages? There's just one more page.

  • There's about 200 lines of code there that uses complex numbers and poles and transforms; we actually ported it to JavaScript from MATLAB code one of our colleagues wrote.

  • So it's possibly a bit messy, a bit long.

  • But there you go.

  • So we've got our Butterworth coefficients.

  • Once we have those, we can generate the IIR filter by passing these coefficients in.

  • There you go.

  • That's, you know, six lines.

  • We have a filter node.
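
For intuition about what those coefficients do once they're handed to `createIIRFilter`: each output sample is a weighted sum of recent inputs (the feedforward coefficients) minus a weighted sum of recent outputs (the feedback coefficients). This is a plain-JavaScript sketch of the standard IIR difference equation, runnable outside the browser; it mirrors the general formula, not any particular browser's internal implementation, and the coefficients below are trivial placeholders, not real Butterworth values:

```javascript
// Applies an IIR filter defined by feedforward (b) and feedback (a)
// coefficients, the same shape of inputs createIIRFilter takes.
// Direct Form I difference equation:
//   a[0]*y[n] = b[0]*x[n] + b[1]*x[n-1] + ... - a[1]*y[n-1] - a[2]*y[n-2] - ...
function applyIIR(feedforward, feedback, input) {
  const x = []; // past input samples, most recent first
  const y = []; // past output samples, most recent first
  return input.map((sample) => {
    x.unshift(sample);
    let acc = 0;
    feedforward.forEach((b, i) => {
      if (x[i] !== undefined) acc += b * x[i];
    });
    feedback.forEach((a, i) => {
      if (i > 0 && y[i - 1] !== undefined) acc -= a * y[i - 1];
    });
    acc /= feedback[0]; // normalize by a[0]
    y.unshift(acc);
    return acc;
  });
}

// With trivial coefficients the filter is an identity: output equals input.
const out = applyIIR([1], [1], [0.5, -0.25, 0.125]);
```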

  • Next up, we need to connect the source to the destination.

  • So what's happening here is we're connecting the source to the low-pass, the low-pass to the high-pass, the high-pass to the notch, and the notch to the destination, all the way through.

  • Finally, we need to process and output this audio.

  • So we've set up this big audio graph, as it's called. Now we need to render this audio graph.

  • So step one is setting what our start point is. You know, it makes sense that you want to start processing from the start, which is what's happening here.

  • Then we need to call this start rendering method.

  • So this is on the OfflineAudioContext.

  • We can do this, and it's super cool, because it returns a promise which resolves to your processed buffer.

  • What I'm going to do now is, you know, just initialize where the output data is going to go, and then kind of do the reverse of what we did to feed it in.

  • We're going to loop over all the channels and get the channel buffer for each.

  • But now this is full of data instead of empty like before. And, for some reason, this channel buffer returns a typed array rather than a regular array.

  • Now, the way to convert from a typed array to a regular array is to go to Stack Overflow, type "how to convert a typed array to a regular array", and copy-paste the code they give you, right there.

  • That's the simplest way I found.
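
For the record, the conversion is a one-liner; `Float32Array` here stands in for the typed array that `getChannelData` returns:

```javascript
// Converting a typed array (like the Float32Array from getChannelData)
// into a regular JavaScript array.
const typed = new Float32Array([1, 2, 3]);
const regular = Array.from(typed); // or, equivalently: [...typed]
// regular is a plain Array, so map, JSON.stringify, etc. behave normally.
```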

  • It works, and we return our processed channel data. And there you go: six steps.

  • We've got our data, we've fed it through the filters, and we've processed it.

  • This is filtered EEG data right there.

  • So after six steps, we're able to turn that into that. And we've got so much mileage out of these images; this is like the third time they've been used in this talk.

  • So look, you know, this is the point where I would bring out a demo, show you how it works, and then get on a plane back to Australia.

  • But, you know, life isn't that simple.

  • There were a few problems with this, and one of the biggest was that it was very slow. Very, very slow.

  • So remember that our scientists want to review, like, four pages of data per second.

  • So that's 250 milliseconds per page.

  • Well, to apply these filters takes 300 milliseconds.

  • That's already over budget, and then you need to factor in fetching and downloading the data, doing the filtering, and then rendering it all. Then clearly, you know, you're nowhere near what we need to hit that goal.

  • So this is what I'd call, you know, just a minor roadblock.

  • So what's the problem?

  • The problem is this createIIRFilter method.

  • You know, it takes around 100 milliseconds to generate a filter, which is reasonable, because you have to provide these coefficients and generate a new, super-configurable filter from scratch.

  • That's okay.

  • But you're doing that, you know, not just once, but one, two, three times.

  • That's three times 100 milliseconds, which is, you know, 300 milliseconds.

  • And the pain doesn't end there, because every page of data that you're looking at has to generate its own filters.

  • So one page: 300 milliseconds. Another page: another 300 milliseconds.

  • It just goes on and on and on.

  • So clearly, this is not gonna work.

  • So where can we go from here?

  • Well, look, we could use web workers, maybe. I'm just putting that out there without having actually tried it.

  • But, you know, even if we could get web workers to use Web Audio, it doesn't solve the problem: the browser is still taking that 300 milliseconds.

  • It might be off the main thread, but the waiting time is still there.

  • We could use WebAssembly, maybe, with Rust or C.

  • Those languages are far better suited to writing your own filter implementation than JavaScript, which is what we talked about before.

  • But we were so close.

  • We'd almost got there; we'd output a lot of filtered data.

  • I didn't want to jump ship and go to, you know, Rust or C just right now.

  • So maybe, could we filter on the back end?

  • You know, similar to WebAssembly, there are server-side languages that are better at running filter implementations.

  • But there's a bit of latency in having to, you know, change the filter, fetch some data, change the filter again, fetch some more data.

  • It's just a lot of data to flow through.

  • Maybe.

  • Is there something else we could do?

  • All right. Well, we're nearing the end, and so I'll tell you a tale of two filters now.

  • So what's the problem? The problem is this.

  • The createIIRFilter method: it's good, it's super configurable, you know, it gives you a general IIR filter, but it's slow to make a filter with.

  • So what's the solution?

  • Well, there's an old proverb: to hide a tree, use a forest.

  • Use a forest so you know, how about you know, the Js completo pissed 2019.

  • Take on it.

  • Two silver filter problem.

  • Use a filter, right.

  • Just two notes about this. One: I don't know if my proverb actually makes any sense. Maybe not.

  • And secondly, when I was writing this, I thought the proverb was actually a super ancient, wise proverb.

  • But it turns out it's just from Ninja Boy in Pokémon Emerald.

  • So it's just something that stuck with me for 14 years.

  • Um, but, you know, there you go.

  • So anyway, filters.

  • There's another filter that we can actually use in Web Audio, and it's called the BiquadFilterNode.

  • Now, if it seems like that description looks pretty similar to what we just saw before, you'd be right.

  • Here are the two descriptions, layered over each other, as written in the MDN docs. I'll give you a second to spot the difference.

  • There's not really much there, so let's do our own kind of comparison.

  • So: do they both generate second-order filters?

  • Yep, they both do. That's good.

  • That's important, because desktop software uses second-order filters.

  • Can you create your own custom filter with it?

  • Well, yes for the IIR filter; no for the biquad filter, where you kind of use what's given to you.

  • Okay, that's all right.

  • Is it a Butterworth filter?

  • So remember, we generate our own Butterworth filter with the createIIRFilter method. But, for the life of me, I couldn't find out what exact filter algorithm is used to create the biquad filter.

  • And that's important, because if something is filtered slightly differently, it's going to look slightly different. And if it looks slightly different, you might not see it when you're flicking through at four pages a second.

  • Finally, kind of the big one is the time taken to generate these filters. Clearly the biquad filter is the big winner here, taking 10 milliseconds versus the 100 milliseconds for the IIR filter node.

  • So the question still remains: does this filter filter the way we want it to?

  • After a bit of research, we can say, with success: yes.

  • The filtered data coming out of a biquad filter is basically, more or less, the same as the data coming out of our desktop software.

  • So the BiquadFilterNode it is. Another big upside is this: remember, before, we had our Butterworth filter code, 200 lines of MATLAB-ported code with transforms and complex numbers.

  • Well, there's the code for setting up a biquad filter.

  • You know, that's nine lines, plus change, if you compress things a little bit.

  • So I call this an absolute win.
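
For a sense of why the biquad setup can be so short: a second-order low-pass is fully determined by just a cutoff frequency and a Q value, where Q = 1/√2 gives the maximally flat Butterworth response. This sketch uses the widely cited Audio EQ Cookbook formulas, one common way such coefficients are computed; it is not necessarily what any particular browser does internally, and the 40 Hz / 250 Hz numbers are illustrative:

```javascript
// Second-order (biquad) low-pass coefficients from a cutoff frequency and Q,
// following the common Audio EQ Cookbook formulas. Q = 1/sqrt(2) gives a
// Butterworth (maximally flat) response.
function lowPassCoefficients(cutoffHz, sampleRate, Q = Math.SQRT1_2) {
  const w0 = (2 * Math.PI * cutoffHz) / sampleRate;
  const alpha = Math.sin(w0) / (2 * Q);
  const cosw0 = Math.cos(w0);
  const a0 = 1 + alpha; // used to normalize so feedback[0] === 1
  return {
    feedforward: [(1 - cosw0) / 2 / a0, (1 - cosw0) / a0, (1 - cosw0) / 2 / a0],
    feedback: [1, (-2 * cosw0) / a0, (1 - alpha) / a0],
  };
}

// E.g. a 40 Hz low-pass for EEG sampled at 250 Hz (illustrative values).
const { feedforward, feedback } = lowPassCoefficients(40, 250);
// A low-pass has unity gain at DC, so sum(feedforward) / sum(feedback) === 1:
const dcGain =
  feedforward.reduce((s, b) => s + b, 0) / feedback.reduce((s, a) => s + a, 0);
```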

  • So: success!

  • So let me give you a demo of these filters actually being applied.

  • All right, so right now we have some EEG data, and this is, you know, usually what the clinicians would look at.

  • Right now, none of the filters are applied, and usually they'd be looking through this data and it looks alright, you know, even without a filter.

  • But sometimes you get really noisy parts like this.

  • This might be where your muscle artifact is coming in, or signal noise.

  • So if you try to review something in here, there could be spike-waves underneath all this noise, and you'd never see them.

  • So what you can do, though, is just turn on your filters.

  • Yeah, and right there.

  • That's so much easier to see.

  • This is so much closer to what the real EEG data looks like.

  • You know, you can also specify exactly what filter you want.

  • You want to make it a 40 hertz low-pass? You want to put a high-pass at, say, 3?

  • You can do that on the fly.

  • Now, right here I've got another bit of EEG.

  • So this is without filters applied.

  • I mean, it's noisy, with the filters off.

  • And you see, underneath all of this, right here, there's a spike-wave.

  • So if you're looking at this and it's a bit noisy, and you're flicking through page after page, you might not see this bit here.

  • But if you have these filters applied, you'd be able to see the underlying epileptic signal there and be able to give the correct diagnosis.

  • So: filters on the web.

  • All right, so we've come a long way together.

  • We've learned a bit about epilepsy, about EEG, about brainwaves, and how we can use the Web Audio API to help decipher these brainwaves.

  • Now, before I fly back home to the land down under, let me leave you with some closing remarks.

  • So why does this matter?

  • You know, why am I giving a talk about Web audio on brain waves?

  • Well, look, there are many other, better talks that have been given about Web Audio.

  • They could have taught you how to make your own multitrack mixer, and how to, you know, apply reverb and do these cool things.

  • Instead, I gave you this one.

  • Well, the reason is that at my company, Seer, you know, we try to help people.

  • You know, we've helped over 1000 patients in our last two years of operation.

  • We're seeing over 40 patients a week coming through our services.

  • And so, at the end of the day, what these patients want is a diagnosis.

  • And as we saw, if you don't see the right data, you're not going to get the right diagnosis. And if you don't get the right diagnosis, you can't get the right treatment.

  • And, you know, in some small way, the Web Audio API is actually helping with this. So technology can make a difference here.

  • You know, it's at the heart of what we're doing, and it's at the heart of what we, and I mean all of us, are doing.

  • I'm not sure that the writers of the Web Audio API spec envisioned, you know, their filters being used for epilepsy diagnosis.

  • But it is, and it's getting done right here.

  • So technology can make a difference, but it's only when you apply technology in the right way that it really can.

  • So at the end of the day, you know, we're the ones making the difference; we get to choose how this technology is used.

  • You know, we get to choose where we spend our time. And look, we're not all volunteering in foreign countries or working at not-for-profits, you know.

  • But we can still use web technologies to make a difference in the lives of others.

  • All right, on a bit of a lighter note, I'd like to leave you with one final demo.

  • So perhaps there's a question on everyone's mind.

  • Previously, we did this: we converted brainwaves to audio, and back to brainwaves.

  • But what if we did this: brainwaves to audio?

  • And just leave it there.

  • All right.

  • Well, for the first time on any, or at least this particular, JSConf stage, I'd like to present to you: the symphony of the mind.

  • Let's see how we go.

  • Well, here we go then.

  • I've got a super cool "enable cool demo" button here, so let's get that enabled.

  • All right, let's begin.

  • It's like airplane noise.

  • Now it's very peaceful.

  • Slowly, it just cuts down to nothing, right?

  • Yeah.


Deciphering Brainwaves with the Web Audio API by Braden Moore | JSConf Budapest 2019
