* Initial support for member suggestion (search and UI)
* Add a custom `BottomSheetScaffold` implementation to work around several scrolling bugs
* Start searching as soon as `@` is typed, add UI following initial designs
* Extract suggestion processing code
* Extract component, add previews, fix tests
* Add tests
* Exclude the forked bottom sheet code from kover coverage
* Add a feature flag for mentions
- Extract the composer & mention suggestions to their own composables.
- Extract mention suggestions processing to its own class.
- Add `MatrixRoom.canTriggerRoomNotification` function.
- Update strings and conditions for displaying the `@room` mention.
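As a rough illustration of what the extracted suggestion processing might do, here is a minimal sketch of matching room members against the text typed after `@`. The names and matching rules are assumptions for illustration, not the actual implementation:

```kotlin
// Hypothetical member model; the real codebase likely uses its own types.
data class RoomMember(val userId: String, val displayName: String?)

// Return members whose user id or display name matches the typed query.
fun suggestMembers(query: String, members: List<RoomMember>): List<RoomMember> {
    val needle = query.removePrefix("@").lowercase()
    return members.filter { member ->
        member.userId.removePrefix("@").lowercase().startsWith(needle) ||
            member.displayName?.lowercase()?.contains(needle) == true
    }
}
```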
---------
Co-authored-by: ElementBot <benoitm+elementbot@element.io>
Show voice messages in the room summary.
Show voice messages in the reply context menu and composer.
Show replies to voice messages in the timeline.
(Before this PR, voice messages were displayed the same way as audio messages.)
Story: https://github.com/vector-im/element-meta/issues/2106
- New `AudioLevelCalculator` that outputs dBov values rescaled to the [0;1] range.
- `VoiceRecorder` now stores the audio levels sampled while recording, then resamples them to 100 samples for use as a waveform preview.
- Waveform data is carried all the way as a `List<Float>` and is only converted to a `List<Int>` in the [0;1024] range, as per the Matrix spec, just before sending.
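The pipeline described above could be sketched roughly as follows. The function names, the dB floor, and the naive resampling strategy are all assumptions for illustration, not the actual implementation:

```kotlin
// Rescale a dBov value (0 dBov = maximum level, negative below it) to [0;1],
// clamping everything quieter than MIN_DB to 0. MIN_DB is an assumed floor.
const val MIN_DB = -50f

fun dbovToLevel(db: Float): Float = ((db - MIN_DB) / -MIN_DB).coerceIn(0f, 1f)

// Naive nearest-index resampling of the recorded levels down (or up) to a
// fixed number of samples. Assumes a non-empty input list.
fun resample(levels: List<Float>, target: Int = 100): List<Float> =
    List(target) { i -> levels[i * levels.size / target] }

// Convert the [0;1] floats to the [0;1024] integer range used on the wire.
fun toWireFormat(levels: List<Float>): List<Int> =
    levels.map { (it * 1024).toInt().coerceIn(0, 1024) }
```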
* Update the chat screen UI using `RoomInfo`.
This is especially useful for getting live values for `hasRoomCall`.
* Ensure the first `MatrixRoomInfo` is emitted ASAP
* Try excluding `*Present$present$*` inner functions from kover as separate entities
* Update strings
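The "emit the first `MatrixRoomInfo` ASAP" behaviour can be illustrated with a minimal holder that, like a `BehaviorSubject`, immediately replays the latest value to a new observer instead of making it wait for the next update. This is a sketch of the idea only; the real code presumably uses coroutine flows:

```kotlin
// Minimal value holder: observers receive the current value on subscription,
// then every subsequent update. Names are illustrative assumptions.
class RoomInfoHolder<T>(initial: T) {
    private var latest: T = initial
    private val observers = mutableListOf<(T) -> Unit>()

    fun observe(observer: (T) -> Unit) {
        observer(latest) // deliver the current value immediately
        observers += observer
    }

    fun update(value: T) {
        latest = value
        observers.forEach { it(value) }
    }
}
```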
---------
Co-authored-by: ElementBot <benoitm+elementbot@element.io>
This is in preparation for further changes to the way the audio level is computed, and to allow recording and sending the waveform. The main reasoning behind the change is twofold:
1) We don't need the precision of Double in our context (we just need a rough indication of changes in audio level to draw a level meter or a waveform in our UI).
2) Performance: it is true that on 64-bit CPUs single operations on Floats and Doubles take the same amount of time (i.e. one clock cycle), but other aspects here weigh in favor of Floats:
- A Float takes half the space in memory compared to a Double, so when storing long lists of them this adds up.
- On Android O and later, the ART runtime can "vectorize" certain operations on lists, making use of the CPU's SIMD registers, which are generally 128 bits wide. Four Floats fit into such a register and can be computed at once, whereas only two Doubles fit, halving the throughput.
References:
- https://source.android.com/docs/core/runtime/improvements
- https://www.slideshare.net/linaroorg/automatic-vectorization-in-art-android-runtime-sfo17216
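A quick illustration of the memory point, using Kotlin's `SIZE_BYTES` constants to make the 2x difference explicit (buffer size here is an arbitrary example):

```kotlin
// A 10,000-sample level buffer costs twice as much memory as Doubles.
fun bufferBytes(samples: Int, bytesPerSample: Int): Int = samples * bytesPerSample

fun main() {
    val samples = 10_000
    println(bufferBytes(samples, Float.SIZE_BYTES))  // 40000 bytes
    println(bufferBytes(samples, Double.SIZE_BYTES)) // 80000 bytes
}
```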
## Type of change
- [x] Feature
- [ ] Bugfix
- [ ] Technical
- [ ] Other :
## Content
This PR consists of several macro-blocks separated by path/package:
- `messages.impl.mediaplayer` : Global (room-wide) media player, currently used only for voice messages but which could be used for all media within EX in the future. It is backed by media3's ExoPlayer. Currently not unit-tested because mocking ExoPlayer is not trivial.
- `messages.impl.voicemessages.play` : Business logic of a timeline voice message. This is all the logic that manages the voice message bubble.
- `messages.impl.timeline.model` & `messages.impl.timeline.factories`: Timeline code that takes care of creating the `content` object for voice messages.
- `messages.impl.timeline.components` : The actual View composable that shows the UI inside a voice message bubble.
All the rest is just small related changes that must be done here and there in existing code.
From a high level perspective this is how it works:
- Voice messages are unlike other message bubbles in that they carry state (e.g. playing, downloading...), so they have a Presenter managing this state.
- The media content (i.e. the ogg file) of a voice message is downloaded from the Rust SDK on first play, then stored in a voice message cache (see the `VoiceMessageCache` class: a subdirectory of the app's cacheDir, indexed by the Matrix content URI). All further play attempts read from the cache without hitting the Rust SDK again.
- Playback of the ogg file is handled by the `VoiceMessagePlayer` class, which is essentially a "view" of the global `MediaPlayer` that allows each voice message to see only the media player state belonging to its own media content.
- Drawing of the waveform is done with an OSS library wrapped in the `WaveformProgressIndicator` composable.
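The cache-keying idea described above can be sketched as follows; the directory name, file extension, and sanitization scheme are assumptions for illustration, not the actual implementation:

```kotlin
import java.io.File

// Map a Matrix content URI (mxc://server/mediaId) to a file inside a
// dedicated cache subdirectory, so repeat plays skip the SDK download.
fun cacheFileFor(cacheDir: File, mxcUri: String): File {
    val mediaId = mxcUri.removePrefix("mxc://").replace('/', '_')
    return File(File(cacheDir, "voice_messages"), "$mediaId.ogg")
}
```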
Known issues:
- The waveform has no position slider.
- The waveform (and together with it the whole message bubble) is taller than the actual Figma design.
- Swipe to reply is disabled for voice messages to avoid conflicting with the audio scrubbing gesture (to reply to a voice message you have to use the long-press menu).
- The loading indicator is always shown (there is no delay).
- Voice messages don't stop playing when redacted.
## Motivation and context
https://github.com/vector-im/element-meta/issues/2083
## Screenshots / GIFs
Provided by Screenshot tests in the PR itself.
- Add additional states to preview.
- Add TODO description for commented code
- Move showUserDefinedSettingStyle from the node to the view for testability.