
Sound File Operations with Visual C++ 6.0

Sound is an important channel for conveying information, and including audio in an application makes it far more engaging; in research and development, audio signal processing is also an important field in its own right. Visual C++, as a powerful development tool, is a natural choice for sound processing. However, the available Visual C++ programming material, whether large reference books or computer magazines, covers sound file handling only in passing, and many programming enthusiasts feel they lack a thorough understanding of the subject. Drawing on experience accumulated during my own study and development, this example explores sound file handling together with fellow programmers. It cannot cover every aspect of audio processing; the hope is simply that it will serve as a starting point for readers who are new to the field and help them move on quickly to the deeper areas of sound processing.

Current computer systems handle sound files in two ways. One is to use off-the-shelf software: tools such as Microsoft's Sound Recorder, Sound Forge, and Cool Edit can record, edit, and play back sound signals, but their functionality is fixed. For more flexible, more complete control over the sound data, you must take the second approach: use the multimedia services that Microsoft provides and write your own program under Windows to perform the specific processing you need. The rest of this article introduces the sound file format and the techniques for processing sound files with Visual C++ under Windows.

I. Implementation Approach

1. The RIFF File Structure and the WAVE File Format

Windows supports two audio file types based on RIFF (Resource Interchange File Format): RMID files for MIDI, and the waveform audio file format, WAVE. The latter is the most common digitized sound format in computing: it is the waveform file format (Waveform Audio) that Microsoft defined specifically for Windows, and because its extension is "*.wav" such files are also called WAVE files. To keep this article focused, "sound file" below always refers to a WAVE file. Two WAVE configurations are especially common, one for mono (11.025 kHz sample rate, 8-bit samples) and one for stereo (44.1 kHz sample rate, 16-bit samples). The sample rate is the number of samples taken per unit time during analog-to-digital conversion; the sample value is the level of the analog sound signal measured during one sampling period. In an 8-bit mono file, each sample is stored in one byte (00H-FFH); in a stereo file, each sampling instant produces a 16-bit value whose high byte and low byte carry the left and right channels respectively. The data chunk of a WAVE file contains samples in pulse-code modulation (PCM) format. Before starting to program, let us first look at the RIFF and WAVE file formats.

A RIFF file has a tree-like structure whose basic unit is called a chunk. Each chunk consists of an identifier, a data size, and the data itself, as shown in Figure 1:

Chunk identifier (4 bytes)
Data size (4 bytes)
Data
Figure 1. Layout of a chunk

As the figure shows, the identifier is a code of four characters, such as "RIFF" or "LIST", that specifies the chunk's ID. The data size, also four bytes, gives the size of the chunk's data field. The data field describes the actual sound information and may itself consist of several sub-chunks. In general, chunks are siblings and cannot nest inside one another, but two chunk types can contain sub-chunks: those with the "RIFF" or "LIST" identifier. The RIFF chunk is the top level and may contain LIST chunks. RIFF and LIST chunks also differ from other chunks in that their data always begins with a four-character code: the data of a RIFF chunk starts with a code (called the format type) specifying how the file's data is stored, so a WAVE file has a "WAVE" format type, while the data of a LIST chunk starts with a code (called the list type) specifying the list's contents; a video file with the ".AVI" extension, for example, has an "strl" list type. The RIFF and LIST chunk structure is as follows:

RIFF/LIST identifier
Data size
Format/list type
Data

Figure 2. RIFF/LIST chunk structure

A WAVE file is a very simple kind of RIFF file; its format type is "WAVE". The RIFF chunk contains two sub-chunks whose IDs are "fmt " and "data". The "fmt " sub-chunk consists of a PCMWAVEFORMAT structure: its size is sizeof(PCMWAVEFORMAT) and its data is the contents of that structure. Figure 3 shows the structure of a WAVE file:

Identifier ("RIFF")
Data size
Format type ("WAVE")
"fmt "
sizeof(PCMWAVEFORMAT)
PCMWAVEFORMAT
"data"
Sound data size
Sound data

Figure 3. WAVE file structure

The PCMWAVEFORMAT structure is defined as follows:

typedef struct
{
 WAVEFORMAT wf;       // waveform format
 WORD wBitsPerSample; // bits per sample of the WAVE file
} PCMWAVEFORMAT;

The WAVEFORMAT structure is defined as follows:

typedef struct
{
 WORD  wFormatTag;      // encoding format, e.g. WAVE_FORMAT_PCM, WAVE_FORMAT_ADPCM
 WORD  nChannels;       // number of channels: 1 for mono, 2 for stereo
 DWORD nSamplesPerSec;  // sample rate
 DWORD nAvgBytesPerSec; // data rate in bytes per second
 WORD  nBlockAlign;     // block alignment
} WAVEFORMAT;

  "data"子塊包含WAVE文件的數字化波形聲音數據,其存放格式依賴(lài)于"fmt"子塊中wFormatTag成員指定的格式種類(lèi),在多聲道WAVE文件中,樣本是交替出現的。如16bit的單聲道WAVE文件和雙聲道WAVE文件的數據采樣格式分別如圖四所示:

16-bit mono:

Sample 1: low byte, high byte
Sample 2: low byte, high byte
...

16-bit stereo:

Sample 1: left channel (low byte, high byte), right channel (low byte, high byte)
...

Figure 4. WAVE file sample layout


2. Reading the Sound Data from a Sound File

Working with a sound file means opening the WAVE file, extracting its sound data, performing the mathematical operations required by your chosen processing algorithm, and then storing the result back into a WAVE-format file. You can implement the reading with the CFile class, or you can use another method: the multimedia functions that Windows provides (their names start with mmio). This section shows how to use those functions to obtain the sound file's data; which processing algorithm to apply then depends on your goal. The workflow for a WAVE file is: 1) call mmioOpen to open the WAVE file and obtain a file handle of type HMMIO; 2) based on the structure of the WAVE file, call mmioRead, mmioWrite, and mmioSeek to read, write, and seek within the file; 3) call mmioClose to close the WAVE file.

The function below follows the WAVE file format to read the data of a two-channel stereo file. When using this code, remember to link the Winmm.lib library into the program and include the header file "Mmsystem.h".

BYTE * GetData(CString *pString)
// Reads the data of a sound file; pString points to the name of the file to open.
{
 if (pString == NULL)
  return NULL;
 HMMIO file1; // HMMIO file handle
 file1 = mmioOpen((LPSTR)(LPCSTR)*pString, NULL, MMIO_READWRITE);
 // open the given WAVE file in read/write mode
 if (file1 == NULL)
 {
  AfxMessageBox("Failed to open the WAVE file!");
  return NULL;
 }
 char style[4]; // four bytes that hold the file's format type
 mmioSeek(file1, 8, SEEK_SET); // seek to the format type of the WAVE file
 mmioRead(file1, style, 4);
 if (style[0]!='W' || style[1]!='A' || style[2]!='V' || style[3]!='E')
  // check whether the file really is in "WAVE" format
 {
  AfxMessageBox("This file is not in WAVE format!");
  mmioClose(file1, 0);
  return NULL;
 }

 PCMWAVEFORMAT format; // PCMWAVEFORMAT object used to inspect the WAVE format
 mmioSeek(file1, 20, SEEK_SET);
 // seek within the open file to its PCMWAVEFORMAT structure
 mmioRead(file1, (char*)&format, sizeof(PCMWAVEFORMAT)); // read the structure
 if (format.wf.nChannels != 2) // is this stereo sound?
 {
  AfxMessageBox("This sound file is not two-channel stereo");
  mmioClose(file1, 0);
  return NULL;
 }
 mmioSeek(file1, 24 + sizeof(PCMWAVEFORMAT), SEEK_SET);
 // read the size of the WAVE file's sound data
 long size;
 mmioRead(file1, (char*)&size, 4);
 BYTE *pData;
 pData = (BYTE*) new char[size]; // allocate a buffer of that size
 mmioSeek(file1, 28 + sizeof(PCMWAVEFORMAT), SEEK_SET); // seek to the sound data
 mmioRead(file1, (char*)pData, size); // read the sound data
 mmioClose(file1, 0); // close the WAVE file
 return pData;
}

3. Using MCI to Operate on Sound Files

The most basic operation on a WAVE sound file is playing the sound data it contains. The Windows API function BOOL sndPlaySound(LPCSTR lpszSound, UINT fuSound) can play small WAV files, where lpszSound is the sound file to play and fuSound is the set of flags used when playing it. For example, to play the file Sound.wav asynchronously, the single call sndPlaySound("c:\\windows\\Sound.wav", SND_ASYNC) is enough, so sndPlaySound is very easy to use. But when the WAVE file is larger than about 100 KB, the system can no longer read all of the sound data into memory at once and sndPlaySound cannot play it. One way around this problem is to operate on the sound file with MCI. Before using MCI, add winmm.lib to the project under Project->Settings->Link->Object/library modules, and include the "mmsystem.h" header file.

The Microsoft API provides the MCI (Media Control Interface) functions mciSendCommand() and mciSendString() for playing WAVE files; only the use of mciSendCommand() is covered here.

Prototype: DWORD mciSendCommand(UINT wDeviceID, UINT wMessage, DWORD dwParam1, DWORD dwParam2);

Parameters: wDeviceID: ID of the device that receives the message;

wMessage: the MCI command message;

dwParam1: flags for the command;

dwParam2: pointer to the parameter block used by the command.

Return value: zero on success; otherwise, the low word of the returned doubleword holds the error information.

When playing a sound file with MCI, the audio device must first be opened. Define an MCI_OPEN_PARMS variable OpenParms and set the relevant members of that structure:

OpenParms.lpstrDeviceType = (LPCSTR) MCI_DEVTYPE_WAVEFORM_AUDIO; // WAVE device type
OpenParms.lpstrElementName = (LPCSTR) Filename; // name of the sound file to open
OpenParms.wDeviceID = 0; // ID of the opened audio device

After the call mciSendCommand (NULL, MCI_OPEN, MCI_WAIT | MCI_OPEN_TYPE | MCI_OPEN_TYPE_ID | MCI_OPEN_ELEMENT, (DWORD)(LPVOID) &OpenParms) sends the MCI_OPEN command, the wDeviceID member of the returned OpenParms indicates which device was opened. To close the audio device, simply call mciSendCommand (m_wDeviceID, MCI_CLOSE, NULL, NULL).

To play a WAVE file, define an MCI_PLAY_PARMS variable PlayParms and set PlayParms.dwFrom = 0, which specifies the position (time) at which playback of the WAVE file starts. Once that is set, the call mciSendCommand (m_wDeviceID, MCI_PLAY, MCI_FROM, (DWORD)(LPVOID)&PlayParms) plays the WAVE sound file.

In addition, calling mciSendCommand (m_wDeviceID, MCI_PAUSE, 0, (DWORD)(LPVOID)&PlayParms) pauses playback, and calling mciSendCommand (m_wDeviceID, MCI_STOP, NULL, NULL) stops it. As you can see, the different operations are all selected by passing different values for the wMessage parameter. Different combinations of wMessage, dwParam1, and dwParam2 can also seek within the file; for example, the following call jumps to the end of the WAVE file: mciSendCommand (m_wDeviceID, MCI_SEEK, MCI_SEEK_TO_END, NULL).

4. Operating on WAVE Files with DirectSound

Although MCI is easy to call and powerful enough for the basic needs of sound file handling, it has a drawback: it can play only one WAVE file at a time. In practice, when a mixing effect is needed and two or more WAVE files must play simultaneously, you need DirectSound from Microsoft's DirectX technology. DirectSound drives the underlying sound card directly and can play eight or more WAV files at the same time.

Implementing DirectSound involves the following steps: 1. create and initialize the DirectSound object; 2. set the application's priority (cooperative) level for the sound device, normally DSSCL_NORMAL; 3. read the WAV file into memory and locate the format chunk, the data chunk's position, and the data length; 4. create the sound buffer; 5. load the sound data; 6. play and stop.

II. Programming Steps

1. Start Visual C++ 6.0 and generate a single-document application named "playsound";

2. Add "MCI Play" and "PlaySound" items to the program's main menu and use the Class Wizard to add the corresponding message handler functions, which process the sound file in the two different ways;

3. Add the "dsound.lib, dxguid.lib, winmm.lib" libraries in the project's "Link" settings, include "mmsystem.h" in the program's view class, and place the sound files to play, "chimes.wav" and "sound.wav", in the program's Debug directory;

4. Add the code, then compile and run the program.

III. Program Code

////////////////////////////////////////////////////
void CPlaysoundView::OnMciplay() // the code below plays a WAVE sound file via MCI
{
 // TODO: Add your command handler code here
 MCI_OPEN_PARMS mciOpenParms;
 MCI_PLAY_PARMS PlayParms;
 mciOpenParms.dwCallback = 0;
 mciOpenParms.lpstrElementName = "d:\\chimes.wav";
 mciOpenParms.wDeviceID = 0;
 mciOpenParms.lpstrDeviceType = "waveaudio";
 mciOpenParms.lpstrAlias = " ";
 PlayParms.dwCallback = 0;
 PlayParms.dwTo = 0;
 PlayParms.dwFrom = 0;
 mciSendCommand(NULL, MCI_OPEN, MCI_OPEN_TYPE | MCI_OPEN_ELEMENT, (DWORD)(LPVOID)&mciOpenParms); // open the audio device
 mciSendCommand(mciOpenParms.wDeviceID, MCI_PLAY, MCI_WAIT, (DWORD)(LPVOID)&PlayParms); // play the WAVE sound file
 mciSendCommand(mciOpenParms.wDeviceID, MCI_CLOSE, NULL, NULL); // close the audio device
}
}
//////////////////////////////////////////////////////////////////////////////
/* The function below plays a WAVE sound file using DirectSound (note that the project settings must include "dsound.lib, dxguid.lib"); the code and comments follow: */
void CPlaysoundView::OnPlaySound() 
{
 // TODO: Add your command handler code here
 LPVOID lpPtr1; // first buffer pointer
 LPVOID lpPtr2; // second buffer pointer
 HRESULT hResult;
 DWORD dwLen1, dwLen2;
 LPVOID m_pMemory; // pointer to the file image in memory
 LPWAVEFORMATEX m_pFormat = NULL; // pointer to the format chunk (LPWAVEFORMATEX)
 LPVOID m_pData = NULL; // pointer to the sound data chunk
 DWORD m_dwSize = 0; // length of the sound data chunk in the WAVE file
 CFile File; // CFile object
 DWORD dwSize; // length of the WAV file
 // open the sound.wav file
 if (!File.Open ("d:\\sound.wav", CFile::modeRead | CFile::shareDenyNone))
  return;
 dwSize = File.Seek (0, CFile::end); // get the length of the WAVE file
 File.Seek (0, CFile::begin); // seek back to the start of the open WAVE file
 // allocate memory for m_pMemory (type LPVOID) to hold the WAVE file's data
 m_pMemory = GlobalAlloc (GMEM_FIXED, dwSize);
 if (File.ReadHuge (m_pMemory, dwSize) != dwSize) // read the file's data
 {
  File.Close ();
  return;
 }
 File.Close ();
 LPDWORD pdw, pdwEnd;
 DWORD dwRiff, dwType, dwLength;
 pdw = (DWORD *) m_pMemory;
 dwRiff = *pdw++;
 dwLength = *pdw++;
 dwType = *pdw++;
 if (dwRiff != mmioFOURCC ('R', 'I', 'F', 'F'))
  return; // the file header must be "RIFF"
 if (dwType != mmioFOURCC ('W', 'A', 'V', 'E'))
  return; // the file format must be "WAVE"
 // locate the format chunk, the data chunk's position, and the data length
 pdwEnd = (DWORD *)((BYTE *) m_pMemory + dwLength - 4);
 bool m_bend = false;
 while ((pdw < pdwEnd) && (!m_bend))
 // continue while pdw has not reached the end of the file and the sound data has not been found
 {
  dwType = *pdw++;
  dwLength = *pdw++;
  switch (dwType)
  {
   case mmioFOURCC('f', 'm', 't', ' '): // the "fmt " chunk
    if (!m_pFormat) // capture the WAVEFORMATEX data
    {
     if (dwLength < sizeof (WAVEFORMAT))
      return;
     m_pFormat = (LPWAVEFORMATEX) pdw;
    }
    break;
   case mmioFOURCC('d', 'a', 't', 'a'): // the "data" chunk
    if (!m_pData || !m_dwSize)
    {
     m_pData = (LPBYTE) pdw; // pointer to the sound data chunk
     m_dwSize = dwLength; // length of the sound data chunk
     if (m_pFormat)
      m_bend = true;
    }
    break;
  }
  pdw = (DWORD *)((BYTE *) pdw + ((dwLength + 1) & ~1)); // advance pdw to the next chunk
 }
 DSBUFFERDESC BufferDesc; // DSBUFFERDESC object
 memset (&BufferDesc, 0, sizeof (BufferDesc));
 BufferDesc.lpwfxFormat = (LPWAVEFORMATEX) m_pFormat;
 BufferDesc.dwSize = sizeof (DSBUFFERDESC);
 BufferDesc.dwBufferBytes = m_dwSize;
 BufferDesc.dwFlags = 0;
 HRESULT hRes;
 LPDIRECTSOUND m_lpDirectSound;
 hRes = ::DirectSoundCreate(0, &m_lpDirectSound, 0); // create the DirectSound object
 if (hRes != DS_OK)
  return;
 m_lpDirectSound->SetCooperativeLevel(this->GetSafeHwnd(), DSSCL_NORMAL);
 // set the sound device priority level to "NORMAL"
 // create the sound data buffer
 LPDIRECTSOUNDBUFFER m_pDSoundBuffer;
 if (m_lpDirectSound->CreateSoundBuffer (&BufferDesc, &m_pDSoundBuffer, 0) != DS_OK)
  return;
 // Load the sound data. Two pointers, lpPtr1 and lpPtr2, address the data in the
 // DirectSoundBuffer; this is designed for handling large WAVE files. dwLen1 and
 // dwLen2 are the lengths of the buffer regions the two pointers address.
 hResult = m_pDSoundBuffer->Lock(0, m_dwSize, &lpPtr1, &dwLen1, &lpPtr2, &dwLen2, 0);
 if (hResult == DS_OK)
 {
  memcpy (lpPtr1, m_pData, dwLen1);
  if (dwLen2 > 0) 
  {
   BYTE *m_pData1 = (BYTE*)m_pData + dwLen1;
   m_pData = (void *)m_pData1;
   memcpy(lpPtr2, m_pData, dwLen2);
  }
  m_pDSoundBuffer->Unlock (lpPtr1, dwLen1, lpPtr2, dwLen2);
 }
 DWORD dwFlags = 0;
 m_pDSoundBuffer->Play (0, 0, dwFlags); // play the WAVE sound data
}

IV. Summary

To make the DirectSound example easier to follow, the author implemented all of the operations in a single function. Readers can of course wrap the code above in a class for better encapsulation; how to do that needs no further explanation here (if it really is unclear, a C++ book will help). Once a class is defined, several objects can be declared at once to mix multiple WAVE sound files. Careful readers may have noticed that the discussion of the WAVE format introduced the PCMWAVEFORMAT structure, while the code that reads the WAVE file's data uses the LPWAVEFORMATEX structure; is that a mistake? It is not: for PCM-format WAVE files the two structures are exactly the same, and LPWAVEFORMATEX is used only because it is convenient for filling in the DSBUFFERDESC object.

There are many ways to operate on WAVE sound files, and used flexibly they give flexible control over WAVE data; readers can consult MSDN for the detailed uses of these functions. This example is only a brief introduction to WAVE file operations; I hope it serves the reader as a useful starting point.





音樂(lè )就是一系列的音符,這些音符在不同的時(shí)間用不同的幅度被播放或者停止。有非常多的指令被用來(lái)播放音樂(lè ),但是這些指令的操作基本相同,都在使用各種各樣不同的音符。在計算機上進(jìn)行作曲,實(shí)際上是存儲了很多組音樂(lè ),回放時(shí)由音頻硬件將這些音符播放出來(lái)。

The MIDI format (file extension .MID) is the standard format for storing digital music.

DirectMusic music segments use the .SGT file extension. Other related files include band files (.BND), which contain instrument information; chordmap files (.CDM), which contain chord instructions that modify the music during playback; style files (.STY), which contain playback style information; and template files (.TPL), which contain templates for creating music segments.

MIDI is a very powerful music format; its only drawback is that the music quality depends on the performance of the music synthesizer, because MIDI records only the notes, so playback quality is determined by the software and hardware doing the playing. An MP3 file (extension .MP3) is a file format similar to a wave file, but the biggest difference between an MP3 file and a WAV file is that MP3 compresses the sound as far as possible while keeping the quality essentially unchanged. MP3 files can be played with the DirectShow component, an extremely powerful multimedia component: with DirectShow you can play almost any media file, sound or video, and some sound files can only be played with DirectShow.

Direct Audio is a composite component made up of the DirectSound and DirectMusic components, as shown in the figure below:

DirectMusic was greatly enhanced in DirectX 8, while DirectSound remained essentially unchanged. DirectSound is the main component for digital sound playback. DirectMusic handles all musical score formats, including MIDI, DirectMusic native files, and wave files. After processing, DirectMusic hands the results to DirectSound for further processing, which means that digitized instruments can be used when playing back MIDI.

Using DirectSound

To use DirectSound you create a COM object that communicates with the sound card, and with that COM object you create independent sound-data buffers (called secondary sound buffers) to hold the audio data. The data in those buffers is mixed in a main mixing buffer (called the primary sound buffer) and can then be played in any format you specify. The playback format is described by sample rate, channel count, and sample precision; the possible sample rates are 8000 Hz, 11025 Hz, 22050 Hz, and 44100 Hz (CD quality).

For the channel count there are two options: single-channel mono and two-channel stereo. Sample precision is limited to two choices: 8-bit low-quality sound and 16-bit high-fidelity sound. Unless changed, the DirectSound primary buffer defaults to a 22050 Hz sample rate, 8-bit precision, stereo. In DirectSound you can adjust the playback speed (which also changes the pitch), the volume, looping, and so on; you can even play in a virtual 3D environment to simulate sound actually surrounding the listener.
What you must do is fill the buffer with sound data. If the sound data is too large, you have to stream it: load a small block of the data, and when that block finishes playing, load the next small block into the buffer, continuing this process until the sound has been fully processed. Streaming audio is achieved by adjusting the play position within the buffer; when playback of a block completes, the application is notified to refresh the audio data. This notify-and-update process is called "notification". Although there is no limit on the number of buffers that can play at the same time, you should still keep the number of buffers reasonable, because each additional buffer consumes a good deal of memory and CPU.

To use DirectSound and DirectMusic in a project, add the header files dsound.h and dmusic.h and link DSound.lib into the project's libraries; adding the DXGuid.lib library makes DirectSound easier to use.

The DirectSound COM interfaces are:

IDirectSound8: the DirectSound interface.
IDirectSoundBuffer8: the interface for primary and secondary buffers; it holds the data and controls playback.
IDirectSoundNotify8: the notification object, which tells the application that a specified play position has been reached.

The relationships between the objects are shown in the figure below:



IDirectSound8 is the main interface. You use it to create buffers (IDirectSoundBuffer8), and from a buffer interface you create the notification interface (IDirectSoundNotify8), which tells the application when a specified position has been reached; the notification interface is very useful when streaming audio files.

Initializing DirectSound

The first step in using DirectSound is to create the IDirectSound8 object, which controls the audio hardware device; it can be created with the DirectSoundCreate8 function.

The DirectSoundCreate8 function creates and initializes an object that supports the IDirectSound8 interface.

HRESULT DirectSoundCreate8(
LPCGUID lpcGuidDevice,
LPDIRECTSOUND8 * ppDS8,
LPUNKNOWN pUnkOuter
);

Parameters

lpcGuidDevice
Address of the GUID that identifies the sound device. The value of this parameter must be one of the GUIDs returned by DirectSoundEnumerate, or NULL for the default device, or one of the following values.

 

Value: Description

DSDEVID_DefaultPlayback: System-wide default audio playback device. Equivalent to NULL.
DSDEVID_DefaultVoicePlayback: Default voice playback device.

 

ppDS8
Address of a variable to receive an IDirectSound8 interface pointer.
pUnkOuter
Address of the controlling object's IUnknown interface for COM aggregation. Must be NULL, because aggregation is not supported.

Return Values

If the function succeeds, it returns DS_OK. If it fails, the return value may be one of the following.

Return Code
DSERR_ALLOCATED
DSERR_INVALIDPARAM
DSERR_NOAGGREGATION
DSERR_NODRIVER
DSERR_OUTOFMEMORY

Remarks

The application must call the IDirectSound8::SetCooperativeLevel method immediately after creating a device object.

 

Creating the Primary Sound Buffer

The primary sound buffer is controlled through an IDirectSoundBuffer object. Creating the primary buffer does not require a DirectX 8 interface, because that interface has never changed. The function used to create a sound buffer is IDirectSound8::CreateSoundBuffer.

The CreateSoundBuffer method creates a sound buffer object to manage audio samples.

HRESULT CreateSoundBuffer(
LPCDSBUFFERDESC pcDSBufferDesc,
LPDIRECTSOUNDBUFFER * ppDSBuffer,
LPUNKNOWN pUnkOuter
);

Parameters

pcDSBufferDesc
Address of a DSBUFFERDESC structure that describes the sound buffer to create.
ppDSBuffer
Address of a variable that receives the IDirectSoundBuffer interface of the new buffer object. Use QueryInterface to obtain IDirectSoundBuffer8. IDirectSoundBuffer8 is not available for the primary buffer.
pUnkOuter
Address of the controlling object's IUnknown interface for COM aggregation. Must be NULL.

Return Values

If the method succeeds, the return value is DS_OK, or DS_NO_VIRTUALIZATION if a requested 3D algorithm was not available and stereo panning was substituted. See the description of the guid3DAlgorithm member of DSBUFFERDESC. If the method fails, the return value may be one of the error values shown in the following table.

Return code
DSERR_ALLOCATED
DSERR_BADFORMAT
DSERR_BUFFERTOOSMALL
DSERR_CONTROLUNAVAIL
DSERR_DS8_REQUIRED
DSERR_INVALIDCALL
DSERR_INVALIDPARAM
DSERR_NOAGGREGATION
DSERR_OUTOFMEMORY
DSERR_UNINITIALIZED
DSERR_UNSUPPORTED

Remarks

DirectSound does not initialize the contents of the buffer, and the application cannot assume that it contains silence.

If an attempt is made to create a buffer with the DSBCAPS_LOCHARDWARE flag on a system where hardware acceleration is not available, the method fails with either DSERR_CONTROLUNAVAIL or DSERR_INVALIDCALL, depending on the operating system.


pcDSBufferDesc is a pointer to a DSBUFFERDESC structure that holds the description of the buffer to create.

The DSBUFFERDESC structure describes the characteristics of a new buffer object. It is used by the IDirectSound8::CreateSoundBuffer method and by the DirectSoundFullDuplexCreate8 function.

An earlier version of this structure, DSBUFFERDESC1, is maintained in Dsound.h for compatibility with DirectX 7 and earlier.

typedef struct DSBUFFERDESC {
DWORD dwSize;
DWORD dwFlags;
DWORD dwBufferBytes;
DWORD dwReserved;
LPWAVEFORMATEX lpwfxFormat;
GUID guid3DAlgorithm;
} DSBUFFERDESC;

Members

dwSize
Size of the structure, in bytes. This member must be initialized before the structure is used.
dwFlags
Flags specifying the capabilities of the buffer. See the dwFlags member of the DSBCAPS structure for a detailed listing of valid flags.
dwBufferBytes
Size of the new buffer, in bytes. This value must be 0 when creating a buffer with the DSBCAPS_PRIMARYBUFFER flag. For secondary buffers, the minimum and maximum sizes allowed are specified by DSBSIZE_MIN and DSBSIZE_MAX, defined in Dsound.h.
dwReserved
Reserved. Must be 0.
lpwfxFormat
Address of a WAVEFORMATEX or WAVEFORMATEXTENSIBLE structure specifying the waveform format for the buffer. This value must be NULL for primary buffers.
guid3DAlgorithm
Unique identifier of the two-speaker virtualization algorithm to be used by DirectSound3D hardware emulation. If DSBCAPS_CTRL3D is not set in dwFlags, this member must be GUID_NULL (DS3DALG_DEFAULT). The following algorithm identifiers are defined.

 

Value: Description (Availability)

DS3DALG_DEFAULT: DirectSound uses the default algorithm. In most cases this is DS3DALG_NO_VIRTUALIZATION. On WDM drivers, if the user has selected a surround sound speaker configuration in Control Panel, the sound is panned among the available directional speakers. (Applies to software mixing only. Available on WDM or VxD drivers.)
DS3DALG_NO_VIRTUALIZATION: 3D output is mapped onto normal left and right stereo panning. At 90 degrees to the left, the sound is coming out of only the left speaker; at 90 degrees to the right, sound is coming out of only the right speaker. The vertical axis is ignored except for scaling of volume due to distance. Doppler shift and volume scaling are still applied, but the 3D filtering is not performed on this buffer. This is the most efficient software implementation, but provides no virtual 3D audio effect. When the DS3DALG_NO_VIRTUALIZATION algorithm is specified, HRTF processing will not be done. Because DS3DALG_NO_VIRTUALIZATION uses only normal stereo panning, a buffer created with this algorithm may be accelerated by a 2D hardware voice if no free 3D hardware voices are available. (Applies to software mixing only. Available on WDM or VxD drivers.)
DS3DALG_HRTF_FULL: The 3D API is processed with the high quality 3D audio algorithm. This algorithm gives the highest quality 3D audio effect, but uses more CPU cycles. See Remarks. (Applies to software mixing only. Available on Microsoft Windows 98 Second Edition and later operating systems when using WDM drivers.)
DS3DALG_HRTF_LIGHT: The 3D API is processed with the efficient 3D audio algorithm. This algorithm gives a good 3D audio effect, but uses fewer CPU cycles than DS3DALG_HRTF_FULL. (Applies to software mixing only. Available on Windows 98 Second Edition and later operating systems when using WDM drivers.)

The only value that must be set is dwFlags, a set of flags that determine the buffer's capabilities.
dwFlags
Flags that specify buffer-object capabilities. Use one or more of the values shown in the following table.

 

Value: Description

DSBCAPS_CTRL3D: The buffer has 3D control capability.
DSBCAPS_CTRLFREQUENCY: The buffer has frequency control capability.
DSBCAPS_CTRLFX: The buffer supports effects processing.
DSBCAPS_CTRLPAN: The buffer has pan control capability.
DSBCAPS_CTRLVOLUME: The buffer has volume control capability.
DSBCAPS_CTRLPOSITIONNOTIFY: The buffer has position notification capability. See the Remarks for DSCBUFFERDESC.
DSBCAPS_GETCURRENTPOSITION2: The buffer uses the new behavior of the play cursor when IDirectSoundBuffer8::GetCurrentPosition is called. In the first version of DirectSound, the play cursor was significantly ahead of the actual playing sound on emulated sound cards; it was directly behind the write cursor. Now, if the DSBCAPS_GETCURRENTPOSITION2 flag is specified, the application can get a more accurate play cursor. If this flag is not specified, the old behavior is preserved for compatibility. This flag affects only emulated devices; if a DirectSound driver is present, the play cursor is accurate for DirectSound in all versions of DirectX.
DSBCAPS_GLOBALFOCUS: The buffer is a global sound buffer. With this flag set, an application using DirectSound can continue to play its buffers if the user switches focus to another application, even if the new application uses DirectSound. The one exception is if you switch focus to a DirectSound application that uses the DSSCL_WRITEPRIMARY flag for its cooperative level. In this case, the global sounds from other applications will not be audible.
DSBCAPS_LOCDEFER: The buffer can be assigned to a hardware or software resource at play time, or when IDirectSoundBuffer8::AcquireResources is called.
DSBCAPS_LOCHARDWARE: The buffer uses hardware mixing.
DSBCAPS_LOCSOFTWARE: The buffer is in software memory and uses software mixing.
DSBCAPS_MUTE3DATMAXDISTANCE: The sound is reduced to silence at the maximum distance. The buffer will stop playing when the maximum distance is exceeded, so that processor time is not wasted. Applies only to software buffers.
DSBCAPS_PRIMARYBUFFER: The buffer is a primary buffer.
DSBCAPS_STATIC: The buffer is in on-board hardware memory.
DSBCAPS_STICKYFOCUS: The buffer has sticky focus. If the user switches to another application not using DirectSound, the buffer is still audible. However, if the user switches to another DirectSound application, the buffer is muted.
DSBCAPS_TRUEPLAYPOSITION: Force IDirectSoundBuffer8::GetCurrentPosition to return the buffer's true play position. This flag is only valid in Windows Vista.
The following code creates a sound buffer:

    // set up the DSBUFFERDESC structure
    DSBUFFERDESC ds_buffer_desc;

    // zero out the structure
    ZeroMemory(&ds_buffer_desc, sizeof(DSBUFFERDESC));

    ds_buffer_desc.dwSize        = sizeof(DSBUFFERDESC);
    ds_buffer_desc.dwFlags       = DSBCAPS_CTRLVOLUME;
    ds_buffer_desc.dwBufferBytes = wave_format.nAvgBytesPerSec * 2;  // 2 seconds
    ds_buffer_desc.lpwfxFormat   = &wave_format;

    // create the first-version buffer object
    if (FAILED(g_ds->CreateSoundBuffer(&ds_buffer_desc, &ds, NULL)))
    {
        // an error occurred
        MessageBox(NULL, "Unable to create sound buffer", "Error", MB_OK);
    }

Setting the Format

For the format there are several options, but it is recommended to choose between 11025 Hz, 16-bit, mono and 22050 Hz, 16-bit, mono. When choosing a format, do not try to use stereo: it wastes processing time and the effect is hard to judge. Likewise, do not use a sample precision other than 16-bit, since lower precision causes a sharp drop in sound quality. For the sample rate, higher is better, but there is no need to go above 22050 Hz; at that rate the output can already approach CD-quality sound without much loss.

The playback format is set by calling IDirectSoundBuffer::SetFormat.

The SetFormat method sets the format of the primary buffer. Whenever this application has the input focus, DirectSound will set the primary buffer to the specified format.

HRESULT SetFormat(
LPCWAVEFORMATEX pcfxFormat
);

Parameters

pcfxFormat
Address of a WAVEFORMATEX structure that describes the new format for the primary sound buffer.

Return Values

If the method succeeds, the return value is DS_OK. If the method fails, the return value may be one of the following error values:

Return code
DSERR_BADFORMAT
DSERR_INVALIDCALL
DSERR_INVALIDPARAM
DSERR_OUTOFMEMORY
DSERR_PRIOLEVELNEEDED
DSERR_UNSUPPORTED

Remarks

The format of the primary buffer should be set before secondary buffers are created.

The method fails if the application has the DSSCL_NORMAL cooperative level.

If the application is using DirectSound at the DSSCL_WRITEPRIMARY cooperative level, and the format is not supported, the method fails.

If the cooperative level is DSSCL_PRIORITY, DirectSound stops the primary buffer, changes the format, and restarts the buffer. The method succeeds even if the hardware does not support the requested format; DirectSound sets the buffer to the closest supported format. To determine whether this has happened, an application can call the GetFormat method for the primary buffer and compare the result with the format that was requested with the SetFormat method.

This method is not available for secondary sound buffers. If a new format is required, the application must create a new DirectSoundBuffer object.


The only parameter of this function is a pointer to a WAVEFORMATEX structure that holds the format information to set.

The WAVEFORMATEX structure defines the format of waveform-audio data. Only format information common to all waveform-audio data formats is included in this structure. For formats that require additional information, this structure is included as the first member in another structure, along with the additional information.

This structure is part of the Platform SDK and is not declared in Dsound.h. It is documented here for convenience.

typedef struct WAVEFORMATEX {
WORD wFormatTag;
WORD nChannels;
DWORD nSamplesPerSec;
DWORD nAvgBytesPerSec;
WORD nBlockAlign;
WORD wBitsPerSample;
WORD cbSize;
} WAVEFORMATEX;

Members

wFormatTag
Waveform-audio format type. Format tags are registered with Microsoft Corporation for many compression algorithms. A complete list of format tags can be found in the Mmreg.h header file. For one- or two-channel PCM data, this value should be WAVE_FORMAT_PCM.
nChannels
Number of channels in the waveform-audio data. Monaural data uses one channel and stereo data uses two channels.
nSamplesPerSec
Sample rate, in samples per second (hertz). If wFormatTag is WAVE_FORMAT_PCM, then common values for nSamplesPerSec are 8.0 kHz, 11.025 kHz, 22.05 kHz, and 44.1 kHz. For non-PCM formats, this member must be computed according to the manufacturer's specification of the format tag.
nAvgBytesPerSec
Required average data-transfer rate, in bytes per second, for the format tag. If wFormatTag is WAVE_FORMAT_PCM, nAvgBytesPerSec should be equal to the product of nSamplesPerSec and nBlockAlign. For non-PCM formats, this member must be computed according to the manufacturer's specification of the format tag.
nBlockAlign
Block alignment, in bytes. The block alignment is the minimum atomic unit of data for the wFormatTag format type. If wFormatTag is WAVE_FORMAT_PCM or WAVE_FORMAT_EXTENSIBLE, nBlockAlign must be equal to the product of nChannels and wBitsPerSample divided by 8 (bits per byte). For non-PCM formats, this member must be computed according to the manufacturer's specification of the format tag.

Software must process a multiple of nBlockAlign bytes of data at a time. Data written to and read from a device must always start at the beginning of a block. For example, it is illegal to start playback of PCM data in the middle of a sample (that is, on a non-block-aligned boundary).

wBitsPerSample
Bits per sample for the wFormatTag format type. If wFormatTag is WAVE_FORMAT_PCM, then wBitsPerSample should be equal to 8 or 16. For non-PCM formats, this member must be set according to the manufacturer's specification of the format tag. If wFormatTag is WAVE_FORMAT_EXTENSIBLE, this value can be any integer multiple of 8. Some compression schemes cannot define a value for wBitsPerSample, so this member can be zero.
cbSize
Size, in bytes, of extra format information appended to the end of the WAVEFORMATEX structure. This information can be used by non-PCM formats to store extra attributes for the wFormatTag. If no extra information is required by the wFormatTag, this member must be set to zero. For WAVE_FORMAT_PCM formats (and only WAVE_FORMAT_PCM formats), this member is ignored.
The following sets the audio format to 11025 Hz, mono, 16-bit:

    // set up the WAVEFORMATEX structure
    WAVEFORMATEX wave_format;

    ZeroMemory(&wave_format, sizeof(WAVEFORMATEX));

    wave_format.wFormatTag      = WAVE_FORMAT_PCM;
    wave_format.nChannels       = 1;        // mono
    wave_format.nSamplesPerSec  = 11025;
    wave_format.wBitsPerSample  = 16;
    wave_format.nBlockAlign     = (wave_format.wBitsPerSample / 8) * wave_format.nChannels;
    wave_format.nAvgBytesPerSec = wave_format.nSamplesPerSec * wave_format.nBlockAlign;
