Feature Requests

Action: Adjust Play Group Mutual Exclusivity (Toggle/Set)
Description: Add Actions to change a group’s mutual exclusivity behavior at runtime, letting users dynamically switch whether clips in a group can play simultaneously or whether starting one clip should stop the others.

Problem: Mutual exclusivity is a powerful structural control (e.g., verse/chorus switching, one-of-many variations), but today it is a static configuration. Users may want the same group to behave differently at different times.
- Example: during a build, allow layers to stack; during the main arrangement, enforce “only one at a time”.
Without runtime control, users must create duplicate groups/pages or redesign clip layouts to achieve dynamic behavior. This limits performance flexibility and complicates controller-driven workflows.

Proposed Solution:
1) Add group exclusivity Actions. Provide Actions such as:
- Set Group Exclusivity: Exclusive / Non-Exclusive
- Toggle Group Exclusivity
- Optional: Set Exclusivity Mode variants if more than one exists (e.g., "Stop others on start" vs "Fade out others", if supported)
2) Scope and target selection
- Target a specific group by name/ID.
- Optional: apply to selected groups, or the "current group".
3) Performance-safe timing (optional). Allow exclusivity changes to be:
- immediate, or
- quantized (next beat/bar) to avoid unexpected mid-bar stopping behavior.
(See the sketch of this timing model after the original post below.)
4) Clear UI feedback
- Show an indicator that exclusivity was changed (and its current state) to prevent confusion during performance.

Benefits:
- Enables dynamic arrangement behavior without duplicating groups or redesigning layouts.
- Supports advanced performance macros: switch from “layering mode” to “section switch mode” instantly.
- Makes controller-driven rigs more expressive and less brittle.
- Reduces project complexity by keeping a single group that can change behavior when needed.

Examples:
Build-up layering:
- Set the group to non-exclusive so multiple clips can stack during a crescendo.
- Then set the group to exclusive for verse/chorus switching.
Rehearsal vs show:
- Non-exclusive during rehearsal for experimentation.
- Exclusive during the show for strict arrangement control.
Footswitch macro:
- One footswitch toggles group exclusivity while another triggers the next clip, enabling different behaviors on demand.

This summary was automatically generated by GPT-5.2 Thinking on 2026-01-09.

Original Post: An action to adjust group mutual exclusivity
It would be great if there were an action to adjust group mutual exclusivity. This would allow me to easily create scenes on-the-fly.
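To make the quantized-timing option concrete, here is a minimal Swift sketch of a play group whose exclusivity can change immediately or at the next beat/bar boundary. All names (`PlayGroup`, `Quantization`, `clockBoundaryReached`) are hypothetical; Loopy Pro exposes no public API, so this only models the proposed behavior.

```swift
import Foundation

/// Hypothetical model of a play group whose exclusivity can change at runtime.
final class PlayGroup {
    enum Exclusivity { case exclusive, nonExclusive }
    enum Quantization { case immediate, nextBeat, nextBar }

    let name: String
    private(set) var exclusivity: Exclusivity
    /// A change scheduled to apply at the next quantization boundary.
    private var pending: Exclusivity?

    init(name: String, exclusivity: Exclusivity = .exclusive) {
        self.name = name
        self.exclusivity = exclusivity
    }

    /// The "Set Group Exclusivity" action: apply now, or defer to a boundary.
    func set(_ newValue: Exclusivity, timing: Quantization) {
        switch timing {
        case .immediate:          exclusivity = newValue
        case .nextBeat, .nextBar: pending = newValue  // applied by the clock
        }
    }

    /// The "Toggle Group Exclusivity" action.
    func toggle(timing: Quantization) {
        let flipped: Exclusivity = (exclusivity == .exclusive) ? .nonExclusive : .exclusive
        set(flipped, timing: timing)
    }

    /// Called by the transport clock at each beat/bar boundary.
    func clockBoundaryReached() {
        if let p = pending { exclusivity = p; pending = nil }
    }
}
```

Deferring the change to a boundary is what prevents the "unexpected mid-bar stop" the request warns about: the flip is armed by the action but only executed by the clock.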
1 · chatgpt-proof-read-done · under review
Action: Select Random Preset (Per AUv3 Instance)
Description: Add an Action that selects a random preset for a chosen AUv3 plugin instance. The random selection should be usable from Actions, MIDI bindings, and Follow Actions, enabling controlled “happy accidents” and rapid sound exploration during performance and sound design.

Problem: Preset browsing can be slow and distracting in live and creative workflows:
- Users often want fast inspiration (“give me a new sound now”) without scrolling long preset lists.
- Sound-design exploration benefits from quick randomization, but doing it manually breaks flow.
- Performance setups may want controlled variability (e.g., a new texture each section) without touching the plugin UI.
Without a built-in random preset action, users rely on external workarounds or abandon the idea.

Proposed Solution:
1) New Action: "Select Random Preset"
- Target: a specific AUv3 plugin instance.
- Behavior: pick one preset at random and load it.
2) Define the preset pool (crucial). Provide options to constrain what “random” means:
- Source: Factory / User / Both
- Scope: current bank/folder only (if applicable) or the entire preset list
- Favorites only (optional)
- Exclude favorites (optional)
- Exclude current preset (default on)
- Optional “avoid repeats” window (e.g., do not repeat any of the last N presets); a sketch of this selection logic follows the post below.
3) Performance-safe switching (optional but valuable)
- Optional quantization: apply the new preset on the next beat/bar to avoid abrupt mid-phrase changes.
- Optional short fade or “safe switch” mode (where feasible) to reduce pops/clicks.
4) Follow Actions + automation integration
- Allow Follow Actions to call "Select Random Preset" (e.g., at clip end, every N bars, on section change).
- Make the action bindable via MIDI so a footswitch can trigger “random next sound”.
5) UI/feedback
- Show the newly selected preset name immediately (in the AUv3 preset area and/or a small toast).
- If no presets exist in the selected pool, show a clear message and do nothing.

Benefits:
- Faster sound exploration and inspiration without breaking creative flow.
- Enables controlled variability in live sets (new preset per section/loop/run).
- Reduces reliance on manual preset browsing and plugin UI navigation.
- Makes AUv3 preset workflows more performable and “instrument-like”.

Examples:
Sound design discovery:
- A single button triggers "Select Random Preset" on a synth instance, excluding the current preset and avoiding the last 10 repeats.
Live variation:
- At the end of a clip, a Follow Action selects a random preset from Favorites only, quantized to the next bar.
Foot-controlled exploration:
- A footswitch triggers “random preset” on a texture plugin while playing, keeping hands on the instrument.

This summary was automatically generated by GPT-5.2 Thinking on 2026-01-09.

Original Post: It would be amazing if button widgets etc. had the ability to select a random preset from a defined list of presets for an effect or device (similar to how the dial widget can select from a list of presets). This would make for some very fun improvisation.
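Here is a minimal sketch of the "avoid repeats" selection logic in Swift. All names (`RandomPresetPicker`, `pool`, `avoidLastN`) are hypothetical; this only models the behavior described above, not Loopy Pro's internals.

```swift
import Foundation

/// Hypothetical picker implementing "Select Random Preset" with an
/// avoid-repeats window and current-preset exclusion.
struct RandomPresetPicker {
    var pool: [String]            // preset names after Source/Scope filtering
    var avoidLastN: Int = 10      // "do not repeat any of the last N presets"
    var history: [String] = []    // most recent picks, oldest first

    /// Returns a random preset, excluding the current one and recent picks.
    /// Returns nil if the pool is empty (the caller shows a clear message).
    mutating func pick(current: String?) -> String? {
        let recent = Set(history.suffix(avoidLastN))
        var candidates = pool.filter { !recent.contains($0) && $0 != current }
        // If the repeat window excludes everything (small pools), fall back
        // to "anything but the current preset" so the action still fires.
        if candidates.isEmpty { candidates = pool.filter { $0 != current } }
        guard let choice = candidates.randomElement() else { return nil }
        history.append(choice)
        return choice
    }
}
```

The fallback when the window empties the candidate list matters for small preset lists: without it, the action would silently do nothing once N picks had been made.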
1 · chatgpt-proof-read-done · under review
Ability to Select Individual Tracks for Session Recording
Description: Currently, Loopy Pro’s session recording captures either a combined mix of all tracks or whole input/output groups. The request is to allow users to select individual tracks (e.g., specific loops, inputs, buses) to include in or exclude from the session recording.

Problem:
- Recording always captures everything passed through the mixer (e.g., combined inputs/outputs), making it impossible to isolate specific elements.
- Users can’t choose to record only certain tracks or stems, limiting flexibility for multitrack recording workflows.
- This limitation hinders recording individual loop layers separately or exporting stems for mixing in a DAW.

Proposed Solution:
- Add a UI for session record configuration that lists all tracks/buses/inputs.
- Allow users to toggle each individual track on/off for inclusion in session recording.
- The session recording should then produce separate audio files per selected track, similar to the “Individual Outputs” feature but user-selectable. (A sketch of such a configuration follows the post below.)

Benefits:
- Supports flexible multitrack recording and stem-exporting workflows.
- Enables deeper integration with external DAWs by providing isolated tracks.
- Enhances user control over what is captured during live or studio sessions.

Examples:
- Record only drum and bass loops into separate files while excluding vocal loops.
- Capture just the guitar input along with selected loops for focused editing.
- Include only bus outputs for stems like “drum bus” or “synth bus”.

This summary was automatically generated by ChatGPT-4 on 2025-06-10.

Original Post: Sometimes I have unused buses, and recording them all is wasting space.
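One way the per-source configuration might be modeled, as a Swift sketch. `SessionRecordConfig` and `Source` are hypothetical names, and the file-naming scheme is purely illustrative, not Loopy Pro's actual output format.

```swift
import Foundation

/// Hypothetical session-record configuration: one toggle per recordable
/// source, producing one output file per enabled source.
struct SessionRecordConfig {
    enum Source: Hashable {
        case loop(name: String)
        case input(name: String)
        case bus(name: String)
    }

    /// Sources the user has toggled on in the configuration UI.
    var enabled: Set<Source> = []

    /// One file per enabled source; unused buses are simply never written.
    func outputFiles(sessionName: String) -> [String] {
        enabled.map { source in
            switch source {
            case .loop(let n):  return "\(sessionName)-loop-\(n).wav"
            case .input(let n): return "\(sessionName)-input-\(n).wav"
            case .bus(let n):   return "\(sessionName)-bus-\(n).wav"
            }
        }
    }
}
```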
1 · chatgpt-proof-read-done · under review
Action to Restore Original Project Tempo (or Use Tempo from Recorded Clip)
Description: Add an action that allows users to return to the original project tempo after using tempo-changing actions (e.g., "Adjust Tempo" or "Nudge"). This could either restore the initial project tempo, or retrieve the tempo from a specific recorded clip.

Problem: In live or improvisational setups, tempo is often modified during a session using nudging or adjustment actions. However, there is currently no action to return to the original project tempo, nor a way to reference the tempo of a specific clip. Unless the performer manually remembers or notes down the original tempo, it becomes impossible to return to it accurately, especially when working with non-round values like 104.17 BPM.

Proposed Solution:
– Add a "Restore Original Project Tempo" action.
– Alternatively/additionally: add an action like "Set Tempo to Tempo of Clip X".
– Optional: display the original project tempo in a status bar or allow it to be queried.
– Optional: expose the original tempo as a dynamic value usable in actions.
(A sketch of this store-and-restore model follows below.)

Benefits:
✅ Enables reliable return to the base tempo during or after performances
✅ Prevents drift and guesswork when tempo is adjusted live
✅ Useful in both live looping and structured project editing
✅ Supports hybrid workflows involving tempo automation and manual changes
✅ Enhances precision and confidence in complex performances

This summary was automatically generated by ChatGPT-4 on April 30, 2025.
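A minimal Swift sketch of the store-and-restore idea, with hypothetical names throughout (`TempoState`, `clipTempi`). The essential point is that the original tempo is captured once at project load and never mutated by nudges.

```swift
import Foundation

/// Hypothetical tempo store backing a "Restore Original Project Tempo" action.
final class TempoState {
    private(set) var current: Double
    /// Captured once when the project loads; nudges never touch it.
    let original: Double
    /// Tempi recorded with individual clips ("Set Tempo to Tempo of Clip X").
    var clipTempi: [String: Double] = [:]

    init(projectTempo: Double) {
        current = projectTempo
        original = projectTempo
    }

    func nudge(by delta: Double) { current += delta }
    func restoreOriginal() { current = original }  // back to e.g. 104.17 BPM exactly
    func setToClip(_ clipID: String) {
        if let t = clipTempi[clipID] { current = t }
    }
}
```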
1 · chatgpt-proof-read-done · under review
Actions: Target "Next Empty Clip" (Audio or MIDI) for Recording/Insertion
Description: Add Actions that automatically target the next available empty clip slot (either audio or MIDI) so users can record or insert content without manually selecting an empty clip each time. This enables fast, repeatable workflows for building takes, layers, or sequences across a grid.

Problem: In many looping and sequencing workflows, users repeatedly record new material into successive empty slots:
- Recording multiple takes (Take 1, Take 2, Take 3…).
- Building variations across a row/column.
- Quickly capturing ideas without touching the screen to locate an empty clip.
Today, users typically must manually select an empty clip slot before recording or creating a MIDI clip. That adds friction and increases the chance of mistakes (recording over an existing clip, selecting the wrong target) during performance.

Proposed Solution:
1) New targeting Actions. Provide actions such as:
- Select Next Empty Audio Clip
- Select Next Empty MIDI Clip
- Select Next Empty Clip (any type)
Optional variants:
- Select Previous Empty Audio/MIDI Clip
- Select Next Empty Clip in Row/Column/Play Group (scope selection)
2) Combined record/create actions (optional but powerful)
- "Record to Next Empty Audio Clip" (one action: find next empty + arm/record).
- "Create MIDI Clip in Next Empty Slot" (find next empty + create).
3) Scope rules. Allow the user to define the search scope:
- Current page only
- Current track/channel only
- Current row/column
- Current play group/clip stack
- Whole project
Define the search order:
- Left-to-right, top-to-bottom (or user-configurable)
- Wrap-around behavior (stop at end vs wrap to first)
(A sketch of this search follows the post below.)
4) Safety behaviors. If no empty clip exists in scope:
- do nothing, or
- optionally create a new row/clip slot (if the app supports it), or
- show a clear notification ("No empty audio clip found in scope").
Never overwrite existing clips unless explicitly requested.

Benefits:
- Faster capture and take-building workflows with fewer taps.
- Reduced risk of recording over the wrong clip.
- Makes foot-controller-driven recording more practical (hands-free).
- Improves creative flow: quickly capture ideas across multiple empty slots.

Examples:
Take capture:
- A footswitch triggers "Record to Next Empty Audio Clip" repeatedly to record multiple takes in adjacent slots.
MIDI idea grid:
- A button triggers "Create MIDI Clip in Next Empty Slot" to populate a sequencer grid quickly.
Variations row:
- Scope set to the current row; each press selects the next empty slot so the user builds variations left-to-right.

This summary was automatically generated by GPT-5.2 Thinking on 2026-01-09.

Original Post: Target next empty audio clip / next MIDI clip
As is, there’s only the option to target the next empty clip, but it would be nice to be able to target specifically only the next empty AUDIO clip.
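A sketch of the proposed search in Swift, assuming a simple grid model. All types (`Slot`, `ClipKind`, `nextEmptySlot`) are hypothetical, and scope filtering (current page, row, etc.) is assumed to have happened before the call.

```swift
import Foundation

enum ClipKind { case audio, midi }

/// Hypothetical clip slot in a grid, after scope filtering.
struct Slot {
    let row: Int, col: Int
    let kind: ClipKind
    let isEmpty: Bool
}

/// Finds the next empty slot after `start` in left-to-right, top-to-bottom
/// order, optionally filtered by type and optionally wrapping around.
func nextEmptySlot(in slots: [Slot],
                   after start: (row: Int, col: Int),
                   kind: ClipKind?,     // nil = any type
                   wrap: Bool) -> Slot? {
    // Reading order: top-to-bottom, then left-to-right within a row.
    let ordered = slots.sorted { ($0.row, $0.col) < ($1.row, $1.col) }
    let startIndex = ordered.firstIndex { ($0.row, $0.col) > (start.row, start.col) }
        ?? ordered.count
    let tail = Array(ordered[startIndex...])          // slots after the cursor
    let head = wrap ? Array(ordered[..<startIndex]) : []  // wrap-around region

    let matches: (Slot) -> Bool = { slot in
        slot.isEmpty && (kind == nil || slot.kind == kind)
    }
    // Only empty slots are ever returned, so existing clips are never
    // overwritten; nil means "no empty clip found in scope".
    return tail.first(where: matches) ?? head.first(where: matches)
}
```

Returning `nil` rather than a fallback slot keeps the safety behavior explicit: the caller decides whether to do nothing, create a slot, or show the notification.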
1 · chatgpt-proof-read-done · under review
Multi-View External Touchscreen Mode (Show 4-6 Independent Screens on One Large Display)
Description: Add an external-display mode that can show multiple independent app views ("screens") simultaneously on one large third-party touchscreen (24" or larger). The goal is to run a single device as the brain while presenting 4-6 separate, touch-interactive panels on one big screen (e.g., Clips, Mixer, Widgets, Set List, Browser, Actions).

Problem: Large performance rigs often need multiple views at the same time:
- On iPad, users constantly switch pages/panels (Clips <-> Mixer <-> Widgets <-> Browser), which costs time and increases live risk.
- Using multiple iPads/iPhones as "visual units" works, but it adds hardware, power, cooling, cabling, and reliability complexity.
- A large external touchscreen has enough real estate to replace several small screens, but there is no way to display multiple independent views at once (instead of a single mirrored or single-window view).
Problems:
- Excessive view switching during performance increases mistakes and slows operation.
- Multi-device setups are expensive and fragile (battery, heat, mounts, networking, sync).
- A single mirrored view on a large display does not solve the "many panels at once" workflow.

Proposed Solution:
1) External Display "Multi-View" mode
- When an external display is connected, offer a mode where the external screen can be split into 2/4/6 panels (user-selectable layouts).
- Each panel can show an independently chosen view, for example:
  - Clips page (or a specific page)
  - Mixer
  - Widget canvas (a specific widget page)
  - Project browser / set lists
  - Actions / sequences monitor
  - Plugin windows (optional)
2) Touch routing and interaction
- Touch input on the external touchscreen should interact directly with the panel being touched.
- Optional "lock panel" to prevent accidental edits (performance safety).
3) Panel assignment and persistence
- Each panel has a selector to choose its content.
- Save the panel layout per project, or as a global preset ("Stage layout", "Studio layout"); a sketch of such a preset follows the post below.
4) Performance-focused options
- "Always-on" critical panels (e.g., master meters + CPU/DSP + battery if available).
- Optional large-font / high-contrast mode for distance viewing.
- Optional "safe mode" where only whitelisted controls are touchable.
5) Compatibility strategy
- If iOS limits true multi-window external UI, provide the best feasible approximation: a dedicated external UI that renders multiple panels while the main device UI remains normal.
- Graceful fallback to the current single-view behavior on unsupported hardware.

Benefits:
- Replaces multiple small devices with one large, touch-capable surface.
- Dramatically reduces live navigation friction (no constant panel switching).
- Improves safety and speed: key controls remain visible and reachable at all times.
- Cleaner stage ergonomics and a simplified rig (less power management, fewer mounts/cables).

Examples:
6-panel stage layout on a 24" touchscreen:
- Top row: Clips (main page), Mixer, Set List
- Bottom row: Widgets (performance macros), Actions/Sequences status, Master meters
Rehearsal layout:
- Large clip grid + mixer + browser, with "lock" enabled for non-critical panels.
Busking/compact rig:
- 4 panels: Clips, Widgets, Master meters, Quick browser, replacing a multi-phone display setup.

This summary was automatically generated by GPT-5.2 Thinking on 2025-12-29.

Original Post: Allow 6 Loopy Pro screens to appear on one 24” or larger third-party touchscreen, for immediate and very spontaneous live performance access (quick grab-and-go on a larger format), designed for a quick glance by a frontman guitarist constantly engaging the audience through a non-stop 45-minute set with no gaps between songs; i.e., one screen with all drum and drum-loop info, one screen with prerecorded .mp3 stereo tracks, etc.
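To make the layout-preset idea concrete, a Swift sketch with hypothetical names (`MultiViewLayout`, `PanelContent`); `Codable` stands in for whatever persistence mechanism Loopy Pro actually uses.

```swift
import Foundation

/// Hypothetical layout preset for the proposed external-display Multi-View
/// mode: a named grid of panels, each assigned an independent view.
struct MultiViewLayout: Codable {
    enum PanelContent: String, Codable {
        case clips, mixer, widgets, setList, browser, actionsMonitor, masterMeters
    }
    struct Panel: Codable {
        var content: PanelContent
        var locked: Bool = false   // "lock panel" performance safety
    }

    var name: String               // e.g. "Stage layout", "Studio layout"
    var rows: Int, columns: Int    // 1x2, 2x2, 2x3 ...
    var panels: [Panel]            // rows * columns entries, reading order
}

// Example: the 6-panel stage layout from the request.
let stage = MultiViewLayout(
    name: "Stage layout", rows: 2, columns: 3,
    panels: [
        .init(content: .clips), .init(content: .mixer), .init(content: .setList),
        .init(content: .widgets),
        .init(content: .actionsMonitor, locked: true),
        .init(content: .masterMeters, locked: true),
    ])
```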
2 · chatgpt-proof-read-done · under review
Stable ID-Based MIDI Binding for Clips and Widgets
Description: Implement a robust MIDI binding system that links MIDI controls to persistent internal IDs for clips and widgets, instead of to their current index, position, or label. The goal is to keep MIDI mappings stable across layout edits, reordering, and design changes, so controllers remain reliable as projects evolve.

Problem: Currently, MIDI bindings appear to be tied to positional or name-based references (for example, "Clip 15" or "Slider 7"). When the user adds, deletes, or reorders clips or widgets, these indices shift, causing existing bindings to:
- Point to the wrong clip or widget, or
- Break entirely and require manual rebuilding.
This leads to:
- MIDI mappings becoming unreliable after even minor layout tweaks.
- Repeatedly having to re-check and rebuild bindings when refining a page or moving UI elements.
- A strong disincentive to iterate on layouts once MIDI is set up.
- High risk of failure in live performance, where a moved clip or widget can silently invalidate mappings.
Users report that even moving a clip slightly on the canvas can change its internal numbering and break mappings, forcing a tedious verification pass over "each and every binding" after small visual adjustments. It also limits practical reuse of MIDI setups across pages and projects.

Proposed Solution: Introduce ID-based object binding for all MIDI mappings:
- Assign each clip and widget a persistent internal ID (unique object identifier).
- When a MIDI binding is created, store the binding against that internal ID, not its index, row, column, or label.
- Ensure that:
  - Reordering clips or widgets on a page does not affect their MIDI bindings.
  - Moving a clip or widget between rows or positions on the canvas leaves its bindings intact.
  - Duplicating a clip or widget (including across pages or projects) offers the option to either copy the bindings along with the object, or create a clean copy without bindings (user-selectable behavior).
  - Only explicit deletion of a clip or widget invalidates its associated bindings.
Implementation notes:
- Provide a safe migration path for existing projects (e.g., converting current index-based bindings to ID-based on load).
- In the control settings / profiles UI, display the bound target by name and location for user clarity, but internally use the stable ID.
- Optionally expose a "relink target" function for reassigning a binding to another object without recreating it from scratch.
(A sketch of such a store follows below.)

Benefits:
- MIDI mappings become resilient to layout changes, renaming, and reordering.
- Users can freely refine pages, move clips a "few centimeters" for better ergonomics, or redesign a performance layout without destroying their control setup.
- Greatly improved reliability in live contexts, where any unexpected re-mapping is unacceptable.
- Enables copying individual clips or widgets (with their bindings) across pages and projects as reusable building blocks.
- Encourages experimentation and modular UI design, fully aligned with Loopy Pro’s flexible canvas concept.

Examples:
Layout refinement:
- A row of clips is moved down to make room for new controls.
- With ID-based binding, the same footswitches still trigger the same clips as before, regardless of their new positions.
Reusing a performance widget:
- A "Record/Play/Stop" button widget with carefully tuned MIDI bindings is copied to a new page or project.
- The copy retains its mappings to the intended target clip or role, instead of reverting to default or pointing at the wrong object.
Multi-song setups:
- A user designs a template page with a grid of clips and a bank of widgets mapped to a MIDI controller.
- They duplicate the page for Song 2 and Song 3, adjust clip contents and layout, and all bindings continue to work without manual re-learning.

This summary was automatically generated by GPT-5.1 Thinking on 2025-12-27.
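A minimal Swift sketch of an ID-based binding store, with hypothetical names throughout (`MidiBinding`, `BindingStore`, `relink`); this models the proposal, not Loopy Pro's actual architecture.

```swift
import Foundation

/// Hypothetical binding record: the target is a stable UUID, never an
/// index, row/column, or label, so moving or reordering objects on the
/// canvas cannot invalidate it.
struct MidiBinding {
    let control: String      // e.g. "CC 64 ch 1" (illustrative)
    var targetID: UUID       // persistent object ID
}

final class BindingStore {
    private(set) var bindings: [MidiBinding] = []
    /// Display names are looked up at render time, so the UI can show the
    /// target by name and location while storage stays ID-based.
    var displayName: (UUID) -> String = { _ in "?" }

    func bind(_ control: String, to id: UUID) {
        bindings.append(MidiBinding(control: control, targetID: id))
    }

    /// "Relink target": reassign a binding without recreating it.
    func relink(control: String, to newID: UUID) {
        if let i = bindings.firstIndex(where: { $0.control == control }) {
            bindings[i].targetID = newID
        }
    }

    /// Only explicit deletion of an object invalidates its bindings.
    func objectDeleted(_ id: UUID) {
        bindings.removeAll { $0.targetID == id }
    }

    /// Migration sketch: convert one legacy index-based binding on project
    /// load, given a map from old indices to the new persistent IDs.
    func migrate(legacyIndex: Int, control: String, indexToID: [Int: UUID]) {
        if let id = indexToID[legacyIndex] { bind(control, to: id) }
    }
}
```

Duplicating an object would simply mean generating a fresh UUID for the copy, then either cloning its bindings against the new ID or leaving them off, which is exactly the user-selectable choice the request describes.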
4 · chatgpt-proof-read-done · under review
Multi-Target Morph Pad With Zones, Rails, and High-Resolution Output
Description: Add a new control-surface widget: a "Morph Pad" that can continuously morph (interpolate) between multiple user-defined target states. Each target stores values for many destinations at once (AUv3 parameters, mixer controls, sends, actions, and/or MIDI). The performer moves a cursor (finger) on a 2D pad, and Loopy Pro outputs a smooth blend across all assigned destinations. The key goal: one gesture should reliably and repeatably control many parameters ("XYZABC..."), not just X and Y.

Problems:
- Complex transitions currently require many separate controls (sliders/knobs) or multiple XY pads, which is slow to build and fragile live.
- Live morphing across many parameters is hard to hit precisely and hard to repeat.
- Freeform touch control without target/snap logic can cause jitter near boundaries and makes it difficult to land on musically meaningful states.
- Users who want "morphing" often depend on external apps/controllers, adding routing complexity and failure points.

Proposed Solution:
1) Morph core: Targets (the foundation)
- Allow adding N "Targets" (e.g., 2–16+).
- Each Target stores a full snapshot of assigned destinations (any number of parameters/controls).
- During performance, compute weights per Target (distance-based interpolation) and output interpolated values to all destinations in real time. (A sketch of this weighting follows below.)
2) Live-safe precision
- Optional "Magnet/Snap" to Targets (strength + radius).
- Optional hysteresis/stability to prevent flicker when hovering near boundaries or between Targets.
- Optional release behavior: hold the last value, return to a default Target, or spring to center.
3) Zones (aligned, performance-oriented)
- Provide an aligned zone editor (rectangles/polygons with snap-to-grid, numeric sizing/position).
- Zones serve as: a) visual overlays (labels) to communicate intent, and/or b) mapping layers: Zone A morphs parameter set A, Zone B morphs parameter set B.
- Rationale: aligned zones keep values targetable and repeatable under finger performance, while still enabling complex layouts.
4) Rails/Paths (line tool for repeatable morph gestures)
- Let users define one or more "Rails" (paths).
- Optional cursor lock-to-rail: the pad behaves like a constrained morph fader along an arbitrary curve.
- Rails enable stage-proof morphs (Clean -> Big -> Destroyed) with minimal risk of unintended states.
5) Scaling, curves, and limits per destination
- Per destination: min/max range, curve (linear/exponential/S-curve), invert, smoothing.
- Optional quantized steps for musically discrete parameters (e.g., rhythmic divisions).
6) High-resolution control output (optional)
- Internal high-resolution smoothing for AUv3 parameters.
- Optional high-resolution external MIDI modes (e.g., 14-bit CC via MSB/LSB pairs and/or NRPN) where appropriate.
7) Fast workflow ("build it in minutes")
- "Add Destination" / learn workflow to capture AUv3 params or internal controls quickly.
- "Capture Target" button: store current values into the selected Target.
- Copy/paste Targets and mappings, and a clear overview list of all destinations.

Benefits:
- Dramatically reduces UI clutter while increasing expressive control.
- Enables repeatable, precise morphing between meaningful sound states.
- Improves reliability for live performance through targets, snap, hysteresis, and rails.
- Unifies internal control and external MIDI ecosystems without extra routing apps.

Examples:
FX Morph: one pad morphs reverb mix, delay feedback, filter cutoff, and drive from "Clean" to "Cinematic" to "Aggressive".
Loop Scene Morph: crossfade track groups, adjust send levels, and tweak global FX with one gesture.
Safe Rail: a single "Clean -> Big" rail that is easy to hit and repeat under stress.
Zone Layers: the top half morphs "Ambient FX", the bottom half morphs "Rhythmic FX", with identical hand motion producing different musical outcomes depending on region.

This summary was automatically generated by GPT-5.2 Thinking on 2025-12-27.
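Distance-based interpolation could take many forms; the sketch below uses inverse-distance weighting with a snap radius, in Swift. All names (`MorphTarget`, `morph`, `snapRadius`) are hypothetical, and every target is assumed to store the same number of destination values.

```swift
import Foundation

/// Hypothetical morph-pad core: inverse-distance weights over N targets,
/// producing one interpolated value per destination.
struct MorphTarget {
    var position: (x: Double, y: Double)   // location on the 2D pad
    var snapshot: [Double]                 // one stored value per destination
}

func morph(cursor: (x: Double, y: Double),
           targets: [MorphTarget],
           snapRadius: Double = 0.05) -> [Double] {
    guard let first = targets.first else { return [] }
    // Distance from the cursor to each target.
    let distances = targets.map { t in
        ((t.position.x - cursor.x) * (t.position.x - cursor.x)
       + (t.position.y - cursor.y) * (t.position.y - cursor.y)).squareRoot()
    }
    // "Magnet/Snap": inside the radius, output that target's exact snapshot,
    // so musically meaningful states are easy to hit and repeat.
    if let i = distances.firstIndex(where: { $0 < snapRadius }) {
        return targets[i].snapshot
    }
    // Inverse-square-distance weights, normalized to sum to 1.
    let raw = distances.map { 1.0 / ($0 * $0) }
    let total = raw.reduce(0, +)
    let weights = raw.map { $0 / total }
    // Weighted blend per destination.
    return (0..<first.snapshot.count).map { d in
        zip(weights, targets).reduce(0) { $0 + $1.0 * $1.1.snapshot[d] }
    }
}
```

For the optional 14-bit external output, each blended value would be scaled to 0–16383 and split into MSB (`value >> 7`) and LSB (`value & 0x7F`) on the standard paired CC numbers; hysteresis would be layered on top by requiring the winning snap target to change only after the cursor clearly leaves its radius.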
1 · chatgpt-proof-read-done · under review
Retrospective Loop Waveform Visualization (Pre-Record Buffer + Instant Waveform)
Description: Add a retrospective (pre-record) loop buffer and immediately display a waveform visualization for the captured audio once the loop is created. The goal is to support “I played it already” moments: users can capture the last few seconds/bars of a performance and instantly see the waveform for editing and confidence.

Problem: In live looping, great moments happen unexpectedly:
- Users start playing, then realize they want that phrase as a loop after it happened.
- Without retrospective capture, the moment is lost or must be recreated.
- Even when audio is captured, the lack of immediate waveform feedback makes it harder to confirm what was recorded, where the transient start is, and whether trimming is needed.
A retrospective buffer plus waveform display would reduce friction and improve confidence when creating loops from spontaneous performance.

Proposed Solution:
1) Retrospective record buffer
- Maintain a continuous circular buffer per selected input/bus (opt-in to control CPU/memory); a sketch of such a buffer follows the post below.
- Allow “Capture last …” actions: last X seconds, or last X beats/bars (tempo-synced), if applicable.
- Optionally provide multiple capture lengths (e.g., 2 bars / 4 bars / 8 bars) as separate actions.
2) Automatic loop creation from buffer
- When triggered, Loopy creates a new loop/clip from the buffer content.
- Provide alignment options: capture aligned to bar boundaries (quantized), or capture “as played” (free).
- Optional transient detection for better start points (nice-to-have).
3) Waveform visualization immediately after capture
- Once the clip exists, show a waveform view right away: in the clip view/editor, or as a temporary overlay preview.
- Include basic markers: start/end points, loop boundary, playhead.
4) Editing integration
- Quick trim/adjust loop points directly from the waveform view.
- Optional “fade in/out” for click prevention (if not already present).
5) Performance and quality considerations
- Configurable buffer length and sources to limit memory/CPU usage.
- Use low-latency waveform generation (generate a coarse waveform first, refine in the background if needed).

Benefits:
- Captures spontaneous musical moments that would otherwise be lost.
- Faster loop creation: no need to re-play parts just to record them.
- Immediate visual confirmation improves confidence and reduces re-takes.
- Waveform-based trimming makes loops cleaner and more musical (better starts/stops).

Examples:
Spontaneous riff capture:
- The user plays a great 4-bar phrase, then taps “Capture last 4 bars” to instantly create a loop and see the waveform for quick trimming.
Free-time texture:
- The user improvises an ambient swell, then captures the last 12 seconds and trims visually to a clean loop boundary.
Live reliability:
- The waveform view immediately shows a clipped transient or an off start point, allowing a quick fix before the loop enters the mix.

This summary was automatically generated by GPT-5.2 Thinking on 2025-12-29.

Original Post: When using retrospective loops it’s hard to know if I covered a whole loop with audio. Visualizing the waveform inside the loop before it’s captured would help me know what the final result will be once I close the loop and play it.
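A minimal Swift sketch of the circular buffer, with hypothetical names (`RetrospectiveBuffer`, `captureLast`). Real code would run on the audio render thread and avoid per-sample work and allocation; this simplified model only shows the capture logic.

```swift
import Foundation

/// Hypothetical retrospective buffer: a fixed-size circular buffer that
/// records continuously, from which "Capture last X seconds" copies a slice.
struct RetrospectiveBuffer {
    private var samples: [Float]
    private var writeIndex = 0
    private var filled = 0
    let sampleRate: Double

    init(seconds: Double, sampleRate: Double = 48_000) {
        precondition(seconds > 0)
        self.sampleRate = sampleRate
        samples = [Float](repeating: 0, count: Int(seconds * sampleRate))
    }

    /// Fed continuously with each incoming audio block.
    mutating func write(_ block: [Float]) {
        for s in block {
            samples[writeIndex] = s
            writeIndex = (writeIndex + 1) % samples.count
            filled = min(filled + 1, samples.count)
        }
    }

    /// "Capture last X seconds": returns the newest samples in playback
    /// order, ready to become a clip and to render a waveform from.
    func captureLast(seconds: Double) -> [Float] {
        let want = min(Int(seconds * sampleRate), filled)
        return (0..<want).map { i in
            samples[(writeIndex - want + i + samples.count) % samples.count]
        }
    }
}
```

For the tempo-synced variant, "last X bars" would first be converted to seconds as `bars * beatsPerBar * 60 / bpm`, and the coarse waveform preview could be computed from the captured slice immediately while a finer rendering is refined in the background.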
2 · chatgpt-proof-read-done · under review