
[api][filebrowser] Refactor and improve file upload public API core design (#4188)

## Overview

This commit introduces a new unified file upload REST API that provides a consistent interface for uploading files across multiple cloud storage providers. The implementation replaces the legacy upload handlers with modern, streaming-based handlers that offer improved performance, better error handling, and enhanced security.

## Key Features

### 1. **New REST API Endpoint** 
- **Endpoint**: `/api/v1/storage/upload/file/`
- **Class**: `UploadFileAPI` using Django REST Framework
- **Method**: POST with multipart/form-data support
- **Parameters** (sent as URL query parameters; see the example request after this list):
  - `destination_path` (required): Target path for the upload
  - `overwrite` (optional, default: false): Whether to overwrite existing files
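
For illustration, a client call to this endpoint might look like the sketch below. It assumes a `requests.Session` already authenticated against a Hue instance at a placeholder URL; the endpoint path, the `file` form part, and the query parameters follow the `UploadFileAPI` and `UploadFileSerializer` introduced in this commit.

```python
import requests

HUE_BASE_URL = "https://hue.example.com"  # placeholder host, adjust for your deployment


def upload_file(session: requests.Session, local_path: str, destination_path: str, overwrite: bool = False) -> dict:
  """Upload a local file through the unified upload API and return the response JSON."""
  params = {
    "destination_path": destination_path,  # required, validated by UploadFileSerializer
    "overwrite": str(overwrite).lower(),   # optional, defaults to false
  }
  with open(local_path, "rb") as fh:
    response = session.post(
      f"{HUE_BASE_URL}/api/v1/storage/upload/file/",
      params=params,       # parameters travel in the query string
      files={"file": fh},  # multipart/form-data part consumed by the upload handler
    )
  response.raise_for_status()
  return response.json()   # contains "file_stats" describing the uploaded file on HTTP 201
```

A successful upload returns HTTP 201 Created with a `file_stats` object for the newly written file.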

### 2. **Unified Upload Handler Architecture**
- Introduced `get_upload_handler()` method in all filesystem interfaces
- Dynamic handler selection based on storage type and destination path (see the dispatch sketch after the provider list below)
- Consistent interface across all storage providers

### 3. **Storage Provider Support**
Enhanced upload handlers for all major storage providers:
- **Amazon S3** (`S3NewFileUploadHandler`): Multipart streaming upload
- **Azure Blob Storage** (`ABFSNewFileUploadHandler`): Direct streaming with append operations
- **Google Cloud Storage** (`GSNewFileUploadHandler`): Multipart upload support
- **Apache Ozone** (`OFSNewFileUploadHandler`): Temporary file buffering approach
- **HDFS**: Enhanced with improved chunking support
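
The provider-specific `get_upload_handler()` implementations live in the individual filesystem interfaces and are not part of this excerpt, so the following is only a schematic sketch of the scheme-based dispatch described above; the HDFS handler name is assumed purely for illustration.

```python
from typing import Optional
from urllib.parse import urlparse

# Class names stand in for the provider-specific handlers listed above;
# "HDFSNewFileUploadHandler" is an assumed name used only for this sketch.
_HANDLER_BY_SCHEME = {
  "s3a": "S3NewFileUploadHandler",
  "abfs": "ABFSNewFileUploadHandler",
  "gs": "GSNewFileUploadHandler",
  "ofs": "OFSNewFileUploadHandler",
  "hdfs": "HDFSNewFileUploadHandler",
}


def pick_upload_handler(destination_path: str) -> Optional[str]:
  """Return the name of the handler that would service the destination path, or None."""
  scheme = urlparse(destination_path).scheme or "hdfs"  # bare paths are treated as HDFS here
  return _HANDLER_BY_SCHEME.get(scheme)


# pick_upload_handler("s3a://bucket/uploads/")  -> "S3NewFileUploadHandler"
# pick_upload_handler("ftp://host/file")        -> None (unsupported scheme; the API responds with 404)
```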

## Technical Improvements

### Security Enhancements
- **File Extension Validation**: Centralized validation in the `is_file_upload_allowed()` function (sketched after this list)
- **Path Traversal Protection**: Validates filenames don't contain path separators
- **Permission Checks**: Verifies write access before upload initiation
- **Size Limits**: Enforces `MAX_FILE_SIZE_UPLOAD_LIMIT` during streaming
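
The centralized helper itself is defined outside this diff, so the following is only a minimal sketch of the checks listed above, modeled on the extension and path-separator validation that the legacy `upload_file()` view (removed in this commit) performed inline; the restricted-extension values are illustrative, as Hue reads them from configuration.

```python
import os
import posixpath
from typing import Optional, Tuple

RESTRICTED_EXTENSIONS = [".exe", ".bat"]  # illustrative values only


def is_file_upload_allowed(file_name: str) -> Tuple[bool, Optional[str]]:
  """Return (allowed, error_message) for a candidate upload filename."""
  # Reject filenames containing path separators to block directory traversal.
  if posixpath.sep in file_name:
    return False, "Invalid filename. Path separators are not allowed."

  # Reject restricted extensions, compared case-insensitively.
  _, ext = os.path.splitext(file_name)
  if ext.lower() in (e.lower() for e in RESTRICTED_EXTENSIONS):
    return False, f'Uploading files with type "{ext}" is not allowed.'

  return True, None
```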

### Performance Optimizations
- **Streaming Uploads**: Direct streaming to cloud storage without buffering the whole file in memory (a simplified sketch follows this list)
- **Configurable Chunk Sizes**: Provider-specific optimal chunk sizes
- **Multipart Support**: Parallel part uploads for S3 and GCS
- **Memory Efficiency**: Minimal memory footprint for large file uploads
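
As a simplified illustration of the streaming behavior (not the handlers' actual code), a size-capped chunked read could look like the sketch below; the chunk size and limit are placeholder values, whereas the real handlers use provider-specific chunk sizes and the configured `MAX_FILE_SIZE_UPLOAD_LIMIT`.

```python
from typing import BinaryIO, Iterator

CHUNK_SIZE_BYTES = 8 * 1024 * 1024   # placeholder; each provider picks its own chunk size
MAX_UPLOAD_SIZE_BYTES = 5 * 1024**3  # placeholder cap standing in for the configured limit


def stream_chunks(source: BinaryIO, chunk_size: int = CHUNK_SIZE_BYTES, max_size: int = MAX_UPLOAD_SIZE_BYTES) -> Iterator[bytes]:
  """Yield chunks from `source`, aborting once the cumulative size exceeds the limit."""
  total = 0
  while True:
    chunk = source.read(chunk_size)
    if not chunk:
      break
    total += len(chunk)
    if 0 <= max_size < total:
      raise ValueError(f"File exceeds maximum allowed size of {max_size} bytes")
    yield chunk  # only one chunk is held in memory at a time
```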

### Error Handling
- **HTTP Status Codes**: Appropriate status codes for each error scenario (mapped to DRF constants after this list):
  - 400: Invalid parameters or file extensions
  - 403: Insufficient permissions
  - 404: Destination path not found
  - 409: File exists (when overwrite=false)
  - 413: File size exceeds limit
  - 500: Server errors
- **Detailed Error Messages**: User-friendly error descriptions
- **Automatic Cleanup**: Removes partial uploads on failure
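
For reference, these scenarios map onto the following `rest_framework.status` constants; the dictionary keys are descriptive labels for this document, not identifiers taken from the code.

```python
from rest_framework import status

UPLOAD_ERROR_STATUS = {
  "invalid_parameters_or_extension": status.HTTP_400_BAD_REQUEST,
  "insufficient_permissions": status.HTTP_403_FORBIDDEN,
  "destination_not_found": status.HTTP_404_NOT_FOUND,
  "file_already_exists": status.HTTP_409_CONFLICT,  # only when overwrite=false
  "file_too_large": status.HTTP_413_REQUEST_ENTITY_TOO_LARGE,
  "server_error": status.HTTP_500_INTERNAL_SERVER_ERROR,
}
```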

### Validation & Testing
- **Request Validation**: New `UploadFileSerializer` for request parameter validation (usage sketched after this list)
- **Comprehensive Test Suite**: 
  - Unit tests for serializer validation
  - API tests with mocked filesystems
  - Edge case handling (empty files, special characters, etc.)
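
A minimal standalone usage of the new serializer, mirroring what `UploadFileAPI.initial()` does with the request's query parameters; this assumes a configured Django/DRF environment such as the Hue runtime or its test harness.

```python
from filebrowser.serializers import UploadFileSerializer

serializer = UploadFileSerializer(data={"destination_path": "s3a://test_bucket/test/uploads/", "overwrite": "true"})

if serializer.is_valid():
  destination_path = serializer.validated_data["destination_path"]
  overwrite = serializer.validated_data["overwrite"]  # the string "true" is coerced to the boolean True
else:
  print(serializer.errors)  # e.g. {"destination_path": ["This field is required."]}
```
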
Harsh Gupta, 5 months ago
parent
commit
6d6025535a

+ 216 - 201
apps/filebrowser/src/filebrowser/api.py

@@ -15,31 +15,34 @@
 # See the License for the specific language governing permissions and
 # See the License for the specific language governing permissions and
 # limitations under the License.
 # limitations under the License.
 
 
-import os
 import json
 import json
 import logging
 import logging
-import operator
 import mimetypes
 import mimetypes
+import os
 import posixpath
 import posixpath
-from io import BytesIO as string_io
 from urllib.parse import quote
 from urllib.parse import quote
 
 
 from django.core.files.uploadhandler import StopUpload
 from django.core.files.uploadhandler import StopUpload
 from django.core.paginator import EmptyPage, Paginator
 from django.core.paginator import EmptyPage, Paginator
 from django.http import HttpResponse, HttpResponseNotModified, HttpResponseRedirect, StreamingHttpResponse
 from django.http import HttpResponse, HttpResponseNotModified, HttpResponseRedirect, StreamingHttpResponse
 from django.utils.http import http_date
 from django.utils.http import http_date
-from django.utils.translation import gettext as _
 from django.views.static import was_modified_since
 from django.views.static import was_modified_since
+from rest_framework import status
+from rest_framework.exceptions import NotFound
+from rest_framework.parsers import MultiPartParser
+from rest_framework.response import Response
+from rest_framework.views import APIView
 
 
-from aws.s3.s3fs import S3ListAllBucketsException, get_s3_home_directory
+from aws.s3.s3fs import get_s3_home_directory, S3ListAllBucketsException
 from azure.abfs.__init__ import get_abfs_home_directory
 from azure.abfs.__init__ import get_abfs_home_directory
 from desktop.auth.backend import is_admin
 from desktop.auth.backend import is_admin
 from desktop.conf import TASK_SERVER_V2
 from desktop.conf import TASK_SERVER_V2
 from desktop.lib import fsmanager, i18n
 from desktop.lib import fsmanager, i18n
 from desktop.lib.conf import coerce_bool
 from desktop.lib.conf import coerce_bool
 from desktop.lib.django_util import JsonResponse
 from desktop.lib.django_util import JsonResponse
+from desktop.lib.exceptions_renderable import PopupException
 from desktop.lib.export_csvxls import file_reader
 from desktop.lib.export_csvxls import file_reader
-from desktop.lib.fs.gc.gs import GSListAllBucketsException, get_gs_home_directory
+from desktop.lib.fs.gc.gs import get_gs_home_directory, GSListAllBucketsException
 from desktop.lib.fs.ozone.ofs import get_ofs_home_directory
 from desktop.lib.fs.ozone.ofs import get_ofs_home_directory
 from desktop.lib.i18n import smart_str
 from desktop.lib.i18n import smart_str
 from desktop.lib.tasks.compress_files.compress_utils import compress_files_in_hdfs
 from desktop.lib.tasks.compress_files.compress_utils import compress_files_in_hdfs
@@ -47,26 +50,25 @@ from desktop.lib.tasks.extract_archive.extract_utils import extract_archive_in_h
 from filebrowser.conf import (
 from filebrowser.conf import (
   ENABLE_EXTRACT_UPLOADED_ARCHIVE,
   ENABLE_EXTRACT_UPLOADED_ARCHIVE,
   FILE_DOWNLOAD_CACHE_CONTROL,
   FILE_DOWNLOAD_CACHE_CONTROL,
-  MAX_FILE_SIZE_UPLOAD_LIMIT,
   REDIRECT_DOWNLOAD,
   REDIRECT_DOWNLOAD,
   RESTRICT_FILE_EXTENSIONS,
   RESTRICT_FILE_EXTENSIONS,
   SHOW_DOWNLOAD_BUTTON,
   SHOW_DOWNLOAD_BUTTON,
 )
 )
 from filebrowser.lib.rwx import compress_mode, filetype, rwx
 from filebrowser.lib.rwx import compress_mode, filetype, rwx
-from filebrowser.utils import parse_broker_url
+from filebrowser.serializers import UploadFileSerializer
+from filebrowser.utils import get_user_fs, parse_broker_url
 from filebrowser.views import (
 from filebrowser.views import (
-  DEFAULT_CHUNK_SIZE_BYTES,
-  MAX_CHUNK_SIZE_BYTES,
   _can_inline_display,
   _can_inline_display,
   _is_hdfs_superuser,
   _is_hdfs_superuser,
   _normalize_path,
   _normalize_path,
+  DEFAULT_CHUNK_SIZE_BYTES,
   extract_upload_data,
   extract_upload_data,
+  MAX_CHUNK_SIZE_BYTES,
   perform_upload_task,
   perform_upload_task,
   read_contents,
   read_contents,
   stat_absolute_path,
   stat_absolute_path,
 )
 )
-from hadoop.conf import has_hdfs_enabled, is_hdfs_trash_enabled
-from hadoop.core_site import get_trash_interval
+from hadoop.conf import is_hdfs_trash_enabled
 from hadoop.fs.exceptions import WebHdfsException
 from hadoop.fs.exceptions import WebHdfsException
 from hadoop.fs.fsutils import do_overwrite_save
 from hadoop.fs.fsutils import do_overwrite_save
 from useradmin.models import Group, User
 from useradmin.models import Group, User
@@ -80,9 +82,9 @@ def error_handler(view_fn):
     try:
     try:
       return view_fn(*args, **kwargs)
       return view_fn(*args, **kwargs)
     except Exception as e:
     except Exception as e:
-      LOG.exception('Error running %s' % view_fn)
-      response['status'] = -1
-      response['message'] = smart_str(e)
+      LOG.exception("Error running %s" % view_fn)
+      response["status"] = -1
+      response["message"] = smart_str(e)
     return JsonResponse(response)
     return JsonResponse(response)
 
 
   return decorator
   return decorator
@@ -97,8 +99,8 @@ def get_filesystems(request):
   for k in fsmanager.get_filesystems(request.user):
   for k in fsmanager.get_filesystems(request.user):
     filesystems[k] = True
     filesystems[k] = True
 
 
-  response['status'] = 0
-  response['filesystems'] = filesystems
+  response["status"] = 0
+  response["filesystems"] = filesystems
 
 
   return JsonResponse(response)
   return JsonResponse(response)
 
 
@@ -113,8 +115,8 @@ def api_error_handler(view_fn):
     try:
     try:
       return view_fn(*args, **kwargs)
       return view_fn(*args, **kwargs)
     except Exception as e:
     except Exception as e:
-      LOG.exception(f'Error running {view_fn.__name__}: {str(e)}')
-      return JsonResponse({'error': str(e)}, status=500)
+      LOG.exception(f"Error running {view_fn.__name__}: {str(e)}")
+      return JsonResponse({"error": str(e)}, status=500)
 
 
   return decorator
   return decorator
 
 
@@ -125,16 +127,16 @@ def _get_hdfs_home_directory(user):
 
 
 def _get_config(fs, request):
 def _get_config(fs, request):
   config = {}
   config = {}
-  if fs == 'hdfs':
+  if fs == "hdfs":
     is_hdfs_superuser = _is_hdfs_superuser(request)
     is_hdfs_superuser = _is_hdfs_superuser(request)
     config = {
     config = {
-      'is_trash_enabled': is_hdfs_trash_enabled(),
+      "is_trash_enabled": is_hdfs_trash_enabled(),
       # TODO: Check if any of the below fields should be part of new Hue user and group management APIs
       # TODO: Check if any of the below fields should be part of new Hue user and group management APIs
-      'is_hdfs_superuser': is_hdfs_superuser,
-      'groups': [str(x) for x in Group.objects.values_list('name', flat=True)] if is_hdfs_superuser else [],
-      'users': [str(x) for x in User.objects.values_list('username', flat=True)] if is_hdfs_superuser else [],
-      'superuser': request.fs.superuser,
-      'supergroup': request.fs.supergroup,
+      "is_hdfs_superuser": is_hdfs_superuser,
+      "groups": [str(x) for x in Group.objects.values_list("name", flat=True)] if is_hdfs_superuser else [],
+      "users": [str(x) for x in User.objects.values_list("username", flat=True)] if is_hdfs_superuser else [],
+      "superuser": request.fs.superuser,
+      "supergroup": request.fs.supergroup,
     }
     }
   return config
   return config
 
 
@@ -154,11 +156,11 @@ def get_all_filesystems(request):
     JsonResponse: A JSON response containing a list of filesystems with their configurations.
     JsonResponse: A JSON response containing a list of filesystems with their configurations.
   """
   """
   fs_home_dir_mapping = {
   fs_home_dir_mapping = {
-    'hdfs': _get_hdfs_home_directory,
-    's3a': get_s3_home_directory,
-    'gs': get_gs_home_directory,
-    'abfs': get_abfs_home_directory,
-    'ofs': get_ofs_home_directory,
+    "hdfs": _get_hdfs_home_directory,
+    "s3a": get_s3_home_directory,
+    "gs": get_gs_home_directory,
+    "abfs": get_abfs_home_directory,
+    "ofs": get_ofs_home_directory,
   }
   }
 
 
   filesystems = []
   filesystems = []
@@ -166,7 +168,7 @@ def get_all_filesystems(request):
     user_home_dir = fs_home_dir_mapping[fs](request.user)
     user_home_dir = fs_home_dir_mapping[fs](request.user)
     config = _get_config(fs, request)
     config = _get_config(fs, request)
 
 
-    filesystems.append({'name': fs, 'user_home_directory': user_home_dir, 'config': config})
+    filesystems.append({"name": fs, "user_home_directory": user_home_dir, "config": config})
 
 
   return JsonResponse(filesystems, safe=False)
   return JsonResponse(filesystems, safe=False)
 
 
@@ -184,21 +186,21 @@ def download(request):
   Returns:
   Returns:
     A response object with the file contents or an error message
     A response object with the file contents or an error message
   """
   """
-  path = request.GET.get('path')
+  path = request.GET.get("path")
   path = _normalize_path(path)
   path = _normalize_path(path)
 
 
   if not SHOW_DOWNLOAD_BUTTON.get():
   if not SHOW_DOWNLOAD_BUTTON.get():
-    return HttpResponse('Download operation is not allowed.', status=403)
+    return HttpResponse("Download operation is not allowed.", status=403)
 
 
   if not request.fs.exists(path):
   if not request.fs.exists(path):
-    return HttpResponse(f'File does not exist: {path}', status=404)
+    return HttpResponse(f"File does not exist: {path}", status=404)
 
 
   if not request.fs.isfile(path):
   if not request.fs.isfile(path):
-    return HttpResponse(f'{path} is not a file.', status=400)
+    return HttpResponse(f"{path} is not a file.", status=400)
 
 
-  content_type = mimetypes.guess_type(path)[0] or 'application/octet-stream'
+  content_type = mimetypes.guess_type(path)[0] or "application/octet-stream"
   stats = request.fs.stats(path)
   stats = request.fs.stats(path)
-  if not was_modified_since(request.META.get('HTTP_IF_MODIFIED_SINCE'), stats['mtime']):
+  if not was_modified_since(request.META.get("HTTP_IF_MODIFIED_SINCE"), stats["mtime"]):
     return HttpResponseNotModified()
     return HttpResponseNotModified()
 
 
   fh = request.fs.open(path)
   fh = request.fs.open(path)
@@ -208,48 +210,48 @@ def download(request):
     request.fs.read(path, offset=0, length=1)
     request.fs.read(path, offset=0, length=1)
   except WebHdfsException as e:
   except WebHdfsException as e:
     if e.code == 403:
     if e.code == 403:
-      return HttpResponse(f'User {request.user.username} is not authorized to download file at path: {path}', status=403)
-    elif request.fs._get_scheme(path).lower() == 'abfs' and e.code == 416:
+      return HttpResponse(f"User {request.user.username} is not authorized to download file at path: {path}", status=403)
+    elif request.fs._get_scheme(path).lower() == "abfs" and e.code == 416:
       # Safe to skip ABFS exception of code 416 for zero length objects, file will get downloaded anyway.
       # Safe to skip ABFS exception of code 416 for zero length objects, file will get downloaded anyway.
-      LOG.debug('Skipping exception from ABFS:' + str(e))
+      LOG.debug("Skipping exception from ABFS:" + str(e))
     else:
     else:
-      return HttpResponse(f'Failed to download file at path {path}: {str(e)}', status=500)  # TODO: status code?
+      return HttpResponse(f"Failed to download file at path {path}: {str(e)}", status=500)  # TODO: status code?
 
 
-  if REDIRECT_DOWNLOAD.get() and hasattr(fh, 'read_url'):
+  if REDIRECT_DOWNLOAD.get() and hasattr(fh, "read_url"):
     response = HttpResponseRedirect(fh.read_url())
     response = HttpResponseRedirect(fh.read_url())
-    setattr(response, 'redirect_override', True)
+    setattr(response, "redirect_override", True)
   else:
   else:
     response = StreamingHttpResponse(file_reader(fh), content_type=content_type)
     response = StreamingHttpResponse(file_reader(fh), content_type=content_type)
 
 
     content_disposition = (
     content_disposition = (
-      request.GET.get('disposition') if request.GET.get('disposition') == 'inline' and _can_inline_display(path) else 'attachment'
+      request.GET.get("disposition") if request.GET.get("disposition") == "inline" and _can_inline_display(path) else "attachment"
     )
     )
 
 
     # Extract filename for HDFS and OFS for now because the path stats object has a bug in fetching name field
     # Extract filename for HDFS and OFS for now because the path stats object has a bug in fetching name field
     # TODO: Fix this super old bug when refactoring the underlying HDFS filesystem code
     # TODO: Fix this super old bug when refactoring the underlying HDFS filesystem code
-    filename = os.path.basename(path) if request.fs._get_scheme(path).lower() in ('hdfs', 'ofs') else stats['name']
+    filename = os.path.basename(path) if request.fs._get_scheme(path).lower() in ("hdfs", "ofs") else stats["name"]
 
 
     # Set the filename in the Content-Disposition header with proper encoding for special characters
     # Set the filename in the Content-Disposition header with proper encoding for special characters
     encoded_filename = quote(filename)
     encoded_filename = quote(filename)
-    response['Content-Disposition'] = f"{content_disposition}; filename*=UTF-8\'\'{encoded_filename}"
+    response["Content-Disposition"] = f"{content_disposition}; filename*=UTF-8''{encoded_filename}"
 
 
-    response["Last-Modified"] = http_date(stats['mtime'])
-    response["Content-Length"] = stats['size']
+    response["Last-Modified"] = http_date(stats["mtime"])
+    response["Content-Length"] = stats["size"]
 
 
     if FILE_DOWNLOAD_CACHE_CONTROL.get():
     if FILE_DOWNLOAD_CACHE_CONTROL.get():
       response["Cache-Control"] = FILE_DOWNLOAD_CACHE_CONTROL.get()
       response["Cache-Control"] = FILE_DOWNLOAD_CACHE_CONTROL.get()
 
 
   request.audit = {
   request.audit = {
-    'operation': 'DOWNLOAD',
-    'operationText': 'User %s downloaded file at path "%s"' % (request.user.username, path),
-    'allowed': True,
+    "operation": "DOWNLOAD",
+    "operationText": 'User %s downloaded file at path "%s"' % (request.user.username, path),
+    "allowed": True,
   }
   }
 
 
   return response
   return response
 
 
 
 
 def _massage_page(page, paginator):
 def _massage_page(page, paginator):
-  return {'page_number': page.number, 'page_size': paginator.per_page, 'total_pages': paginator.num_pages, 'total_size': paginator.count}
+  return {"page_number": page.number, "page_size": paginator.per_page, "total_pages": paginator.num_pages, "total_size": paginator.count}
 
 
 
 
 @api_error_handler
 @api_error_handler
@@ -271,21 +273,21 @@ def listdir_paged(request):
   Raises:
   Raises:
     HttpResponse: With appropriate status codes for errors.
     HttpResponse: With appropriate status codes for errors.
   """
   """
-  path = request.GET.get('path', '/')  # Set default path for index directory
+  path = request.GET.get("path", "/")  # Set default path for index directory
   path = _normalize_path(path)
   path = _normalize_path(path)
 
 
   if not request.fs.isdir(path):
   if not request.fs.isdir(path):
-    return HttpResponse(f'{path} is not a directory.', status=400)
+    return HttpResponse(f"{path} is not a directory.", status=400)
 
 
   # Extract pagination parameters
   # Extract pagination parameters
-  pagenum = int(request.GET.get('pagenum', 1))
-  pagesize = int(request.GET.get('pagesize', 30))
+  pagenum = int(request.GET.get("pagenum", 1))
+  pagesize = int(request.GET.get("pagesize", 30))
 
 
   # Determine if operation should be performed as another user
   # Determine if operation should be performed as another user
   do_as = None
   do_as = None
   if is_admin(request.user) or request.user.has_hue_permission(action="impersonate", app="security"):
   if is_admin(request.user) or request.user.has_hue_permission(action="impersonate", app="security"):
-    do_as = request.GET.get('doas', request.user.username)
-  if hasattr(request, 'doas'):
+    do_as = request.GET.get("doas", request.user.username)
+  if hasattr(request, "doas"):
     do_as = request.doas
     do_as = request.doas
 
 
   # Get stats for all files in the directory
   # Get stats for all files in the directory
@@ -295,22 +297,22 @@ def listdir_paged(request):
     else:
     else:
       all_stats = request.fs.listdir_stats(path)
       all_stats = request.fs.listdir_stats(path)
   except (S3ListAllBucketsException, GSListAllBucketsException) as e:
   except (S3ListAllBucketsException, GSListAllBucketsException) as e:
-    return HttpResponse(f'Bucket listing is not allowed: {e}', status=403)
+    return HttpResponse(f"Bucket listing is not allowed: {e}", status=403)
 
 
   # Apply filter first if specified
   # Apply filter first if specified
-  filter_string = request.GET.get('filter')
+  filter_string = request.GET.get("filter")
   if filter_string:
   if filter_string:
-    all_stats = [sb for sb in all_stats if filter_string in sb['name']]
+    all_stats = [sb for sb in all_stats if filter_string in sb["name"]]
 
 
   # Next, sort with proper handling of None values
   # Next, sort with proper handling of None values
-  sortby = request.GET.get('sortby', 'name')
-  descending = coerce_bool(request.GET.get('descending', False))
-  valid_sort_fields = {'type', 'name', 'atime', 'mtime', 'user', 'group', 'size'}
+  sortby = request.GET.get("sortby", "name")
+  descending = coerce_bool(request.GET.get("descending", False))
+  valid_sort_fields = {"type", "name", "atime", "mtime", "user", "group", "size"}
 
 
   if sortby not in valid_sort_fields:
   if sortby not in valid_sort_fields:
     LOG.info(f"Ignoring invalid sort attribute '{sortby}' for list directory operation.")
     LOG.info(f"Ignoring invalid sort attribute '{sortby}' for list directory operation.")
   else:
   else:
-    numeric_fields = {'size', 'atime', 'mtime'}
+    numeric_fields = {"size", "atime", "mtime"}
 
 
     def sorting_key(item):
     def sorting_key(item):
       """Generate a sorting key that handles None values for different field types."""
       """Generate a sorting key that handles None values for different field types."""
@@ -320,7 +322,7 @@ def listdir_paged(request):
         return 0 if value is None else value
         return 0 if value is None else value
       else:
       else:
         # Treat None as an empty string for non-numeric fields
         # Treat None as an empty string for non-numeric fields
-        return '' if value is None else value
+        return "" if value is None else value
 
 
     try:
     try:
       all_stats = sorted(all_stats, key=sorting_key, reverse=descending)
       all_stats = sorted(all_stats, key=sorting_key, reverse=descending)
@@ -341,7 +343,7 @@ def listdir_paged(request):
   if page:
   if page:
     page.object_list = [_massage_stats(request, stat_absolute_path(path, s)) for s in shown_stats]
     page.object_list = [_massage_stats(request, stat_absolute_path(path, s)) for s in shown_stats]
 
 
-  response = {'files': page.object_list if page else [], 'page': _massage_page(page, paginator) if page else {}}
+  response = {"files": page.object_list if page else [], "page": _massage_page(page, paginator) if page else {}}
 
 
   return JsonResponse(response)
   return JsonResponse(response)
 
 
@@ -356,13 +358,13 @@ def display(request):
 
 
   Note that display by length and offset are on bytes, not on characters.
   Note that display by length and offset are on bytes, not on characters.
   """
   """
-  path = request.GET.get('path', '/')  # Set default path for index directory
+  path = request.GET.get("path", "/")  # Set default path for index directory
   path = _normalize_path(path)
   path = _normalize_path(path)
 
 
   if not request.fs.isfile(path):
   if not request.fs.isfile(path):
-    return HttpResponse(f'{path} is not a file.', status=400)
+    return HttpResponse(f"{path} is not a file.", status=400)
 
 
-  encoding = request.GET.get('encoding') or i18n.get_site_encoding()
+  encoding = request.GET.get("encoding") or i18n.get_site_encoding()
 
 
   # Need to deal with possibility that length is not present
   # Need to deal with possibility that length is not present
   # because the offset came in via the toolbar manual byte entry.
   # because the offset came in via the toolbar manual byte entry.
@@ -388,14 +390,14 @@ def display(request):
   mode = request.GET.get("mode")
   mode = request.GET.get("mode")
   compression = request.GET.get("compression")
   compression = request.GET.get("compression")
 
 
-  if mode and mode != 'text':
+  if mode and mode != "text":
     return HttpResponse("Mode value must be 'text'.", status=400)
     return HttpResponse("Mode value must be 'text'.", status=400)
   if offset < 0:
   if offset < 0:
     return HttpResponse("Offset may not be less than zero.", status=400)
     return HttpResponse("Offset may not be less than zero.", status=400)
   if length < 0:
   if length < 0:
     return HttpResponse("Length may not be less than zero.", status=400)
     return HttpResponse("Length may not be less than zero.", status=400)
   if length > MAX_CHUNK_SIZE_BYTES:
   if length > MAX_CHUNK_SIZE_BYTES:
-    return HttpResponse(f'Cannot request chunks greater than {MAX_CHUNK_SIZE_BYTES} bytes.', status=400)
+    return HttpResponse(f"Cannot request chunks greater than {MAX_CHUNK_SIZE_BYTES} bytes.", status=400)
 
 
   # Read out based on meta.
   # Read out based on meta.
   _, offset, length, contents = read_contents(compression, path, request.fs, offset, length)
   _, offset, length, contents = read_contents(compression, path, request.fs, offset, length)
@@ -404,21 +406,21 @@ def display(request):
   file_contents = None
   file_contents = None
   if isinstance(contents, str):
   if isinstance(contents, str):
     file_contents = contents
     file_contents = contents
-    mode = 'text'
+    mode = "text"
   else:
   else:
     try:
     try:
       file_contents = contents.decode(encoding)
       file_contents = contents.decode(encoding)
-      mode = 'text'
+      mode = "text"
     except UnicodeDecodeError:
     except UnicodeDecodeError:
       LOG.error("Cannot decode file contents with encoding: %s." % encoding)
       LOG.error("Cannot decode file contents with encoding: %s." % encoding)
       return HttpResponse("Cannot display file content. Please download the file instead.", status=422)
       return HttpResponse("Cannot display file content. Please download the file instead.", status=422)
 
 
   data = {
   data = {
-    'contents': file_contents,
-    'offset': offset,
-    'length': length,
-    'end': offset + len(contents),
-    'mode': mode,
+    "contents": file_contents,
+    "offset": offset,
+    "length": length,
+    "end": offset + len(contents),
+    "mode": mode,
   }
   }
 
 
   return JsonResponse(data)
   return JsonResponse(data)
@@ -429,11 +431,11 @@ def stat(request):
   """
   """
   Returns the generic stats of FS object.
   Returns the generic stats of FS object.
   """
   """
-  path = request.GET.get('path')
+  path = request.GET.get("path")
   path = _normalize_path(path)
   path = _normalize_path(path)
 
 
   if not request.fs.exists(path):
   if not request.fs.exists(path):
-    return HttpResponse(f'Object does not exist: {path}', status=404)
+    return HttpResponse(f"Object does not exist: {path}", status=404)
 
 
   stats = request.fs.stats(path)
   stats = request.fs.stats(path)
 
 
@@ -472,14 +474,14 @@ def upload_chunks(request):
     for _ in request.FILES.values():
     for _ in request.FILES.values():
       pass
       pass
   except StopUpload as e:
   except StopUpload as e:
-    error_message = 'Error occurred during chunk file upload.'
-    LOG.error(f'{error_message} {str(e)}')
+    error_message = "Error occurred during chunk file upload."
+    LOG.error(f"{error_message} {str(e)}")
     return HttpResponse(error_message, status=500)
     return HttpResponse(error_message, status=500)
 
 
   # Check if the file is larger than the single chunk size
   # Check if the file is larger than the single chunk size
   total_parts = int(request.GET.get("qqtotalparts", 0))
   total_parts = int(request.GET.get("qqtotalparts", 0))
   if total_parts > 0:
   if total_parts > 0:
-    return JsonResponse({'uuid': request.GET.get('qquuid')})
+    return JsonResponse({"uuid": request.GET.get("qquuid")})
 
 
   # Check if the file is smaller than the chunk size
   # Check if the file is smaller than the chunk size
   elif total_parts == 0:
   elif total_parts == 0:
@@ -489,8 +491,8 @@ def upload_chunks(request):
       return JsonResponse(response)
       return JsonResponse(response)
 
 
     except Exception as e:
     except Exception as e:
-      error_message = 'Error occurred during chunk file upload.'
-      LOG.error(f'{error_message} {str(e)}')
+      error_message = "Error occurred during chunk file upload."
+      LOG.error(f"{error_message} {str(e)}")
       return HttpResponse(error_message, status=500)
       return HttpResponse(error_message, status=500)
 
 
 
 
@@ -511,67 +513,80 @@ def upload_complete(request):
 
 
     return JsonResponse(response)
     return JsonResponse(response)
   except Exception as e:
   except Exception as e:
-    error_message = 'Error occurred during chunk file upload completion.'
-    LOG.error(f'{error_message} {str(e)}')
+    error_message = "Error occurred during chunk file upload completion."
+    LOG.error(f"{error_message} {str(e)}")
     return HttpResponse(error_message, status=500)
     return HttpResponse(error_message, status=500)
 
 
 
 
-@api_error_handler
-def upload_file(request):
-  # Read request body first to prevent RawPostDataException later on which occurs when trying to access body after it has already been read
-  body_data_bytes = string_io(request.body)
-
-  uploaded_file = request.FILES['file']
-  dest_path = request.POST.get('destination_path')
-  overwrite = coerce_bool(request.POST.get('overwrite', False))
-
-  # Check if the file type is restricted
-  _, file_type = os.path.splitext(uploaded_file.name)
-  if RESTRICT_FILE_EXTENSIONS.get() and file_type.lower() in [ext.lower() for ext in RESTRICT_FILE_EXTENSIONS.get()]:
-    return HttpResponse(f'Uploading files with type "{file_type}" is not allowed. Hue is configured to restrict this type.', status=400)
-
-  # Check if the file size exceeds the maximum allowed size
-  max_size = MAX_FILE_SIZE_UPLOAD_LIMIT.get()
-  if max_size >= 0 and uploaded_file.size >= max_size:
-    return HttpResponse(
-      f'File exceeds maximum allowed size of {max_size} bytes. Hue is configured to restrict uploads larger than this limit.', status=413
-    )
+class UploadFileAPI(APIView):
+  parser_classes = [MultiPartParser]
 
 
-  # Check if the destination path is a directory and the file name contains a path separator
-  # This prevents directory traversal attacks
-  if request.fs.isdir(dest_path) and posixpath.sep in uploaded_file.name:
-    return HttpResponse(f'Invalid filename. Path separators are not allowed.', status=400)
-
-  # Check if the file already exists at the destination path
-  filepath = request.fs.join(dest_path, uploaded_file.name)
-  if request.fs.exists(filepath):
-    # If overwrite is true, attempt to remove the existing file
-    if overwrite:
-      try:
-        request.fs.rmtree(filepath)
-      except Exception as e:
-        err_message = 'Failed to remove already existing file.'
-        LOG.exception(f'{err_message} {str(e)}')
-        return HttpResponse(err_message, status=500)
-    else:
-      err_message = f'The file {uploaded_file.name} already exists at the destination path.'
-      LOG.error(err_message)
-      return HttpResponse(err_message, status=409)
+  def initial(self, request, *args, **kwargs):
+    """Dynamically select and set the upload handler.
 
 
-  # Check if the destination path already exists or not
-  if not request.fs.exists(dest_path):
-    return HttpResponse(f'The destination path {dest_path} does not exist.', status=404)
+    This method is called before the upload handler is used.
+    It sets the upload handler for the request.
+    """
+    LOG.info(f"UploadFileAPI.initial called by user: {request.user.username}")
 
 
-  try:
-    request.fs.upload_v1(request.META, input_data=body_data_bytes, destination=dest_path, username=request.user.username)
-  except Exception as ex:
-    return HttpResponse(f'Upload to {filepath} failed: {str(ex)}', status=500)
+    try:
+      # Validate and parse request parameters
+      serializer = UploadFileSerializer(data=request.query_params)
+      serializer.is_valid(raise_exception=True)
 
 
-  response = {
-    'uploaded_file_stats': _massage_stats(request, stat_absolute_path(filepath, request.fs.stats(filepath))),
-  }
+      destination_path = serializer.validated_data["destination_path"]
+      overwrite = serializer.validated_data["overwrite"]
 
 
-  return JsonResponse(response)
+      LOG.debug(f"Upload request - destination: {destination_path}, overwrite: {overwrite}")
+
+      username = request.user.username
+      fs = get_user_fs(username)
+
+      LOG.debug(f"Retrieved filesystem for user: {username}")
+
+      # Get the appropriate upload handler
+      upload_handler = fs.get_upload_handler(destination_path, overwrite)
+      if not upload_handler:
+        LOG.error(f"No upload handler found for path: {destination_path}")
+        raise NotFound({"error": f"No supported upload handler found for path: {destination_path}"})
+
+      LOG.info(f"Selected upload handler: {upload_handler.__class__.__name__} for destination: {destination_path}")
+      request.upload_handlers = [upload_handler]
+
+      super().initial(request, *args, **kwargs)
+
+    except Exception as e:
+      LOG.error(f"Error in UploadFileAPI.initial: {e}")
+      raise
+
+  def post(self, request, *args, **kwargs):
+    """Handles the file upload response after the upload handler has done its work.
+
+    This method is called after the upload handler has uploaded the file.
+    request.FILES now contains the metadata dict returned by the upload handler.
+    """
+    try:
+      uploaded_file = request.FILES.get("file")
+
+      LOG.debug(f"Retrieved uploaded file metadata: {type(uploaded_file)}")
+
+      if not isinstance(uploaded_file, dict):
+        LOG.error(f"Invalid upload response - expected dict, got: {type(uploaded_file)}")
+        return Response(
+          {"error": "File upload failed or was not handled correctly by the upload handler."}, status=status.HTTP_400_BAD_REQUEST
+        )
+
+      LOG.info("File upload completed successfully")
+      response_data = {"file_stats": uploaded_file}
+
+      return Response(response_data, status=status.HTTP_201_CREATED)
+
+    except PopupException as e:
+      LOG.exception(f"Upload failed with PopupException: {e.message} (code: {e.error_code})")
+      return Response({"error": e.message}, status=e.error_code)
+    except Exception as e:
+      LOG.exception(f"Unexpected error in UploadFileAPI.post: {e}")
+      return Response({"error": "An unexpected error occurred while uploading the file."}, status=status.HTTP_500_INTERNAL_SERVER_ERROR)
 
 
 
 
 @api_error_handler
 @api_error_handler
@@ -586,8 +601,8 @@ def mkdir(request):
     A HttpResponse with a status code and message indicating the success or failure of the directory creation.
     A HttpResponse with a status code and message indicating the success or failure of the directory creation.
   """
   """
   # TODO: Check if this needs to be a PUT request
   # TODO: Check if this needs to be a PUT request
-  path = request.POST.get('path')
-  name = request.POST.get('name')
+  path = request.POST.get("path")
+  name = request.POST.get("name")
 
 
   # Check if path and name are provided
   # Check if path and name are provided
   if not path or not name:
   if not path or not name:
@@ -595,7 +610,7 @@ def mkdir(request):
 
 
   # Validate the 'name' parameter for invalid characters
   # Validate the 'name' parameter for invalid characters
   if posixpath.sep in name or "#" in name:
   if posixpath.sep in name or "#" in name:
-    return HttpResponse(f"Slashes or hashes are not allowed in directory name. Please choose a different name.", status=400)
+    return HttpResponse("Slashes or hashes are not allowed in directory name. Please choose a different name.", status=400)
 
 
   dir_path = request.fs.join(path, name)
   dir_path = request.fs.join(path, name)
 
 
@@ -609,8 +624,8 @@ def mkdir(request):
 
 
 @api_error_handler
 @api_error_handler
 def touch(request):
 def touch(request):
-  path = request.POST.get('path')
-  name = request.POST.get('name')
+  path = request.POST.get("path")
+  name = request.POST.get("name")
 
 
   # Check if path and name are provided
   # Check if path and name are provided
   if not path or not name:
   if not path or not name:
@@ -618,7 +633,7 @@ def touch(request):
 
 
   # Validate the 'name' parameter for invalid characters
   # Validate the 'name' parameter for invalid characters
   if name and (posixpath.sep in name):
   if name and (posixpath.sep in name):
-    return HttpResponse(f"Slashes are not allowed in filename. Please choose a different name.", status=400)
+    return HttpResponse("Slashes are not allowed in filename. Please choose a different name.", status=400)
 
 
   file_path = request.fs.join(path, name)
   file_path = request.fs.join(path, name)
 
 
@@ -637,11 +652,11 @@ def save_file(request):
 
 
   Does the save and then redirects back to the edit page.
   Does the save and then redirects back to the edit page.
   """
   """
-  path = request.POST.get('path')
+  path = request.POST.get("path")
   path = _normalize_path(path)
   path = _normalize_path(path)
 
 
-  encoding = request.POST.get('encoding')
-  data = request.POST.get('contents').encode(encoding)
+  encoding = request.POST.get("encoding")
+  data = request.POST.get("contents").encode(encoding)
 
 
   if not path:
   if not path:
     return HttpResponse("Path parameter is required for saving the file.", status=400)
     return HttpResponse("Path parameter is required for saving the file.", status=400)
@@ -660,8 +675,8 @@ def save_file(request):
 
 
 @api_error_handler
 @api_error_handler
 def rename(request):
 def rename(request):
-  source_path = request.POST.get('source_path', '')
-  destination_path = request.POST.get('destination_path', '')
+  source_path = request.POST.get("source_path", "")
+  destination_path = request.POST.get("destination_path", "")
 
 
   # Check if source and destination paths are provided
   # Check if source and destination paths are provided
   if not source_path or not destination_path:
   if not source_path or not destination_path:
@@ -707,23 +722,23 @@ def _validate_copy_move_operation(request, source_path, destination_path):
 
 
   # Check if paths are identical
   # Check if paths are identical
   if request.fs.normpath(source_path) == request.fs.normpath(destination_path):
   if request.fs.normpath(source_path) == request.fs.normpath(destination_path):
-    return HttpResponse('Source and destination paths must be different.', status=400)
+    return HttpResponse("Source and destination paths must be different.", status=400)
 
 
   # Verify source path exists
   # Verify source path exists
   if not request.fs.exists(source_path):
   if not request.fs.exists(source_path):
-    return HttpResponse('Source file or folder does not exist.', status=404)
+    return HttpResponse("Source file or folder does not exist.", status=404)
 
 
   # Check if the destination path is a directory
   # Check if the destination path is a directory
   if not request.fs.isdir(destination_path):
   if not request.fs.isdir(destination_path):
-    return HttpResponse('Destination path must be a directory.', status=400)
+    return HttpResponse("Destination path must be a directory.", status=400)
 
 
   # Check if destination path is parent of source path
   # Check if destination path is parent of source path
   if _is_destination_parent_of_source(request, source_path, destination_path):
   if _is_destination_parent_of_source(request, source_path, destination_path):
-    return HttpResponse('Destination cannot be the parent directory of source.', status=400)
+    return HttpResponse("Destination cannot be the parent directory of source.", status=400)
 
 
   # Check if file or folder already exists at destination path
   # Check if file or folder already exists at destination path
   if request.fs.exists(request.fs.join(destination_path, os.path.basename(source_path))):
   if request.fs.exists(request.fs.join(destination_path, os.path.basename(source_path))):
-    return HttpResponse('File or folder already exists at destination path.', status=409)
+    return HttpResponse("File or folder already exists at destination path.", status=409)
 
 
 
 
 @api_error_handler
 @api_error_handler
@@ -737,8 +752,8 @@ def move(request):
   Returns:
   Returns:
     Success or error response with appropriate status codes
     Success or error response with appropriate status codes
   """
   """
-  source_path = request.POST.get('source_path', '')
-  destination_path = request.POST.get('destination_path', '')
+  source_path = request.POST.get("source_path", "")
+  destination_path = request.POST.get("destination_path", "")
 
 
   # Validate the operation and return error response if any scenario fails
   # Validate the operation and return error response if any scenario fails
   validation_response = _validate_copy_move_operation(request, source_path, destination_path)
   validation_response = _validate_copy_move_operation(request, source_path, destination_path)
@@ -760,8 +775,8 @@ def copy(request):
   Returns:
   Returns:
     Success or error response with appropriate status codes
     Success or error response with appropriate status codes
   """
   """
-  source_path = request.POST.get('source_path', '')
-  destination_path = request.POST.get('destination_path', '')
+  source_path = request.POST.get("source_path", "")
+  destination_path = request.POST.get("destination_path", "")
 
 
   # Validate the operation and return error response if any scenario fails
   # Validate the operation and return error response if any scenario fails
   validation_response = _validate_copy_move_operation(request, source_path, destination_path)
   validation_response = _validate_copy_move_operation(request, source_path, destination_path)
@@ -769,10 +784,10 @@ def copy(request):
     return validation_response
     return validation_response
 
 
   # Copy method for Ozone FS returns a string of skipped files if their size is greater than configured chunk size.
   # Copy method for Ozone FS returns a string of skipped files if their size is greater than configured chunk size.
-  if source_path.startswith('ofs://'):
+  if source_path.startswith("ofs://"):
     ofs_skip_files = request.fs.copy(source_path, destination_path, recursive=True, owner=request.user)
     ofs_skip_files = request.fs.copy(source_path, destination_path, recursive=True, owner=request.user)
     if ofs_skip_files:
     if ofs_skip_files:
-      return JsonResponse({'skipped_files': ofs_skip_files}, status=500)  # TODO: Status code?
+      return JsonResponse({"skipped_files": ofs_skip_files}, status=500)  # TODO: Status code?
   else:
   else:
     request.fs.copy(source_path, destination_path, recursive=True, owner=request.user)
     request.fs.copy(source_path, destination_path, recursive=True, owner=request.user)
 
 
@@ -781,24 +796,24 @@ def copy(request):
 
 
 @api_error_handler
 @api_error_handler
 def content_summary(request):
 def content_summary(request):
-  path = request.GET.get('path')
+  path = request.GET.get("path")
   path = _normalize_path(path)
   path = _normalize_path(path)
 
 
   if not path:
   if not path:
     return HttpResponse("Path parameter is required to fetch content summary.", status=400)
     return HttpResponse("Path parameter is required to fetch content summary.", status=400)
 
 
   if not request.fs.exists(path):
   if not request.fs.exists(path):
-    return HttpResponse(f'Path does not exist: {path}', status=404)
+    return HttpResponse(f"Path does not exist: {path}", status=404)
 
 
   response = {}
   response = {}
   try:
   try:
     content_summary = request.fs.get_content_summary(path)
     content_summary = request.fs.get_content_summary(path)
-    replication_factor = request.fs.stats(path)['replication']
+    replication_factor = request.fs.stats(path)["replication"]
 
 
-    content_summary.summary.update({'replication': replication_factor})
+    content_summary.summary.update({"replication": replication_factor})
     response = content_summary.summary
     response = content_summary.summary
   except Exception:
   except Exception:
-    return HttpResponse(f'Failed to fetch content summary for path: {path}', status=500)
+    return HttpResponse(f"Failed to fetch content summary for path: {path}", status=500)
 
 
   return JsonResponse(response)
   return JsonResponse(response)
 
 
@@ -806,8 +821,8 @@ def content_summary(request):
 @api_error_handler
 @api_error_handler
 def set_replication(request):
 def set_replication(request):
   # TODO: Check if this needs to be a PUT request
   # TODO: Check if this needs to be a PUT request
-  path = request.POST.get('path')
-  replication_factor = request.POST.get('replication_factor')
+  path = request.POST.get("path")
+  replication_factor = request.POST.get("replication_factor")
 
 
   result = request.fs.set_replication(path, replication_factor)
   result = request.fs.set_replication(path, replication_factor)
   if not result:
   if not result:
@@ -819,8 +834,8 @@ def set_replication(request):
 @api_error_handler
 @api_error_handler
 def rmtree(request):
 def rmtree(request):
   # TODO: Check if this needs to be a DELETE request
   # TODO: Check if this needs to be a DELETE request
-  path = request.POST.get('path')
-  skip_trash = coerce_bool(request.POST.get('skip_trash', False))
+  path = request.POST.get("path")
+  skip_trash = coerce_bool(request.POST.get("skip_trash", False))
 
 
   request.fs.rmtree(path, skip_trash)
   request.fs.rmtree(path, skip_trash)
 
 
@@ -829,26 +844,26 @@ def rmtree(request):
 
 
 @api_error_handler
 @api_error_handler
 def get_trash_path(request):
 def get_trash_path(request):
-  path = request.GET.get('path')
+  path = request.GET.get("path")
   path = _normalize_path(path)
   path = _normalize_path(path)
   response = {}
   response = {}
 
 
   trash_path = request.fs.trash_path(path)
   trash_path = request.fs.trash_path(path)
-  user_home_trash_path = request.fs.join(request.fs.current_trash_path(trash_path), request.user.get_home_directory().lstrip('/'))
+  user_home_trash_path = request.fs.join(request.fs.current_trash_path(trash_path), request.user.get_home_directory().lstrip("/"))
 
 
   if request.fs.isdir(user_home_trash_path):
   if request.fs.isdir(user_home_trash_path):
-    response['trash_path'] = user_home_trash_path
+    response["trash_path"] = user_home_trash_path
   elif request.fs.isdir(trash_path):
   elif request.fs.isdir(trash_path):
-    response['trash_path'] = trash_path
+    response["trash_path"] = trash_path
   else:
   else:
-    response['trash_path'] = None
+    response["trash_path"] = None
 
 
   return JsonResponse(response)
   return JsonResponse(response)
 
 
 
 
 @api_error_handler
 @api_error_handler
 def trash_restore(request):
 def trash_restore(request):
-  path = request.POST.get('path')
+  path = request.POST.get("path")
   request.fs.restore(path)
   request.fs.restore(path)
 
 
   return HttpResponse(status=200)
   return HttpResponse(status=200)
@@ -864,10 +879,10 @@ def trash_purge(request):
 @api_error_handler
 @api_error_handler
 def chown(request):
 def chown(request):
   # TODO: Check if this needs to be a PUT request
   # TODO: Check if this needs to be a PUT request
-  path = request.POST.get('path')
+  path = request.POST.get("path")
   user = request.POST.get("user")
   user = request.POST.get("user")
   group = request.POST.get("group")
   group = request.POST.get("group")
-  recursive = coerce_bool(request.POST.get('recursive', False))
+  recursive = coerce_bool(request.POST.get("recursive", False))
 
 
   # TODO: Check if we need to explicitly handle encoding anywhere
   # TODO: Check if we need to explicitly handle encoding anywhere
   request.fs.chown(path, user, group, recursive=recursive)
   request.fs.chown(path, user, group, recursive=recursive)
@@ -891,12 +906,12 @@ def chmod(request):
     "other_execute",
     "other_execute",
     "sticky",
     "sticky",
   )
   )
-  path = request.POST.get('path')
-  permission = json.loads(request.POST.get("permission", '{}'))
+  path = request.POST.get("path")
+  permission = json.loads(request.POST.get("permission", "{}"))
 
 
   mode = compress_mode([coerce_bool(permission.get(p)) for p in perm_names])
   mode = compress_mode([coerce_bool(permission.get(p)) for p in perm_names])
 
 
-  request.fs.chmod(path, mode, recursive=coerce_bool(permission.get('recursive', False)))
+  request.fs.chmod(path, mode, recursive=coerce_bool(permission.get("recursive", False)))
 
 
   return HttpResponse(status=200)
   return HttpResponse(status=200)
 
 
@@ -907,8 +922,8 @@ def extract_archive_using_batch_job(request):
   if not ENABLE_EXTRACT_UPLOADED_ARCHIVE.get():
   if not ENABLE_EXTRACT_UPLOADED_ARCHIVE.get():
     return HttpResponse("Extract archive operation is disabled by configuration.", status=500)  # TODO: status code?
     return HttpResponse("Extract archive operation is disabled by configuration.", status=500)  # TODO: status code?
 
 
-  upload_path = request.fs.netnormpath(request.POST.get('upload_path'))
-  archive_name = request.POST.get('archive_name')
+  upload_path = request.fs.netnormpath(request.POST.get("upload_path"))
+  archive_name = request.POST.get("archive_name")
 
 
   if upload_path and archive_name:
   if upload_path and archive_name:
     try:
     try:
@@ -917,7 +932,7 @@ def extract_archive_using_batch_job(request):
       # archive_name = urllib_unquote(archive_name)
       # archive_name = urllib_unquote(archive_name)
       response = extract_archive_in_hdfs(request, upload_path, archive_name)
       response = extract_archive_in_hdfs(request, upload_path, archive_name)
     except Exception as e:
     except Exception as e:
-      return HttpResponse(f'Failed to extract archive: {str(e)}', status=500)  # TODO: status code?
+      return HttpResponse(f"Failed to extract archive: {str(e)}", status=500)  # TODO: status code?
 
 
   return JsonResponse(response)
   return JsonResponse(response)
 
 
@@ -928,17 +943,17 @@ def compress_files_using_batch_job(request):
   if not ENABLE_EXTRACT_UPLOADED_ARCHIVE.get():
   if not ENABLE_EXTRACT_UPLOADED_ARCHIVE.get():
     return HttpResponse("Compress files operation is disabled by configuration.", status=500)  # TODO: status code?
     return HttpResponse("Compress files operation is disabled by configuration.", status=500)  # TODO: status code?
 
 
-  upload_path = request.fs.netnormpath(request.POST.get('upload_path'))
-  archive_name = request.POST.get('archive_name')
-  file_names = request.POST.getlist('file_name')
+  upload_path = request.fs.netnormpath(request.POST.get("upload_path"))
+  archive_name = request.POST.get("archive_name")
+  file_names = request.POST.getlist("file_name")
 
 
   if upload_path and file_names and archive_name:
   if upload_path and file_names and archive_name:
     try:
     try:
       response = compress_files_in_hdfs(request, file_names, upload_path, archive_name)
       response = compress_files_in_hdfs(request, file_names, upload_path, archive_name)
     except Exception as e:
     except Exception as e:
-      return HttpResponse(f'Failed to compress files: {str(e)}', status=500)  # TODO: status code?
+      return HttpResponse(f"Failed to compress files: {str(e)}", status=500)  # TODO: status code?
   else:
   else:
-    return HttpResponse('Output directory is not set.', status=500)  # TODO: status code?
+    return HttpResponse("Output directory is not set.", status=500)  # TODO: status code?
 
 
   return JsonResponse(response)
   return JsonResponse(response)
 
 
@@ -947,11 +962,11 @@ def compress_files_using_batch_job(request):
 def get_available_space_for_upload(request):
 def get_available_space_for_upload(request):
   redis_client = parse_broker_url(TASK_SERVER_V2.BROKER_URL.get())
   redis_client = parse_broker_url(TASK_SERVER_V2.BROKER_URL.get())
   try:
   try:
-    upload_available_space = int(redis_client.get('upload_available_space'))
+    upload_available_space = int(redis_client.get("upload_available_space"))
     if upload_available_space is None:
     if upload_available_space is None:
       return HttpResponse("upload_available_space key is not set in Redis.", status=500)  # TODO: status code?
       return HttpResponse("upload_available_space key is not set in Redis.", status=500)  # TODO: status code?
 
 
-    return JsonResponse({'upload_available_space': upload_available_space})
+    return JsonResponse({"upload_available_space": upload_available_space})
   except Exception as e:
   except Exception as e:
     message = f"Failed to get available space from Redis: {str(e)}"
     message = f"Failed to get available space from Redis: {str(e)}"
     LOG.exception(message)
     LOG.exception(message)
@@ -964,15 +979,15 @@ def get_available_space_for_upload(request):
 def bulk_op(request, op):
 def bulk_op(request, op):
   # TODO: Also try making a generic request data fetching helper method
   # TODO: Also try making a generic request data fetching helper method
   bulk_dict = request.POST.copy()
   bulk_dict = request.POST.copy()
-  path_list = request.POST.getlist('source_path') if op in (copy, move) else request.POST.getlist('path')
+  path_list = request.POST.getlist("source_path") if op in (copy, move) else request.POST.getlist("path")
 
 
   error_dict = {}
   error_dict = {}
   for p in path_list:
   for p in path_list:
     tmp_dict = bulk_dict
     tmp_dict = bulk_dict
     if op in (copy, move):
     if op in (copy, move):
-      tmp_dict['source_path'] = p
+      tmp_dict["source_path"] = p
     else:
     else:
-      tmp_dict['path'] = p
+      tmp_dict["path"] = p
 
 
     request.POST = tmp_dict
     request.POST = tmp_dict
     response = op(request)
     response = op(request)
@@ -980,11 +995,11 @@ def bulk_op(request, op):
     if response.status_code != 200:
     if response.status_code != 200:
       # TODO: Improve the error handling with new error UX
       # TODO: Improve the error handling with new error UX
       # Currently, we are storing the error in the error_dict based on response type for each path
       # Currently, we are storing the error in the error_dict based on response type for each path
-      res_content = response.content.decode('utf-8')
+      res_content = response.content.decode("utf-8")
       if isinstance(response, JsonResponse):
       if isinstance(response, JsonResponse):
         error_dict[p] = json.loads(res_content)  # Simply assign to not have dupicate error fields
         error_dict[p] = json.loads(res_content)  # Simply assign to not have dupicate error fields
       else:
       else:
-        error_dict[p] = {'error': res_content}
+        error_dict[p] = {"error": res_content}
 
 
   if error_dict:
   if error_dict:
     return JsonResponse(error_dict, status=500)  # TODO: Check if we need diff status code or diff json structure?
     return JsonResponse(error_dict, status=500)  # TODO: Check if we need diff status code or diff json structure?
@@ -998,13 +1013,13 @@ def _massage_stats(request, stats):
   into the format that the views would like it in.
   into the format that the views would like it in.
   """
   """
   stats_dict = stats.to_json_dict()
   stats_dict = stats.to_json_dict()
-  normalized_path = request.fs.normpath(stats_dict.get('path'))
+  normalized_path = request.fs.normpath(stats_dict.get("path"))
 
 
   stats_dict.update(
   stats_dict.update(
     {
     {
-      'path': normalized_path,
-      'type': filetype(stats.mode),
-      'rwx': rwx(stats.mode, stats.aclBit),
+      "path": normalized_path,
+      "type": filetype(stats.mode),
+      "rwx": rwx(stats.mode, stats.aclBit),
     }
     }
   )
   )
 
 

File diff suppressed because it is too large
+ 198 - 499
apps/filebrowser/src/filebrowser/api_test.py


+ 26 - 0
apps/filebrowser/src/filebrowser/serializers.py

@@ -0,0 +1,26 @@
+#!/usr/bin/env python
+# Licensed to Cloudera, Inc. under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  Cloudera, Inc. licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+from rest_framework import serializers
+
+
+class UploadFileSerializer(serializers.Serializer):
+  """
+  Validates the query parameters for the file upload API.
+  """
+
+  destination_path = serializers.CharField(required=True, allow_blank=False)
+  overwrite = serializers.BooleanField(default=False)

+ 121 - 0
apps/filebrowser/src/filebrowser/serializers_tests.py

@@ -0,0 +1,121 @@
+#!/usr/bin/env python
+# Licensed to Cloudera, Inc. under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  Cloudera, Inc. licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from filebrowser.serializers import UploadFileSerializer
+
+
+class TestUploadFileSerializer:
+  def test_valid_data(self):
+    serializer = UploadFileSerializer(data={"destination_path": "s3a://test_bucket/test/uploads/"})
+
+    assert serializer.is_valid(), f"Serializer validation failed: {serializer.errors}"
+    assert serializer.validated_data["destination_path"] == "s3a://test_bucket/test/uploads/"
+    assert serializer.validated_data["overwrite"] is False  # Default value
+
+  def test_valid_data_with_overwrite(self):
+    serializer = UploadFileSerializer(data={"destination_path": "s3a://test_bucket/test/uploads/", "overwrite": True})
+
+    assert serializer.is_valid(), f"Serializer validation failed: {serializer.errors}"
+    assert serializer.validated_data["destination_path"] == "s3a://test_bucket/test/uploads/"
+    assert serializer.validated_data["overwrite"] is True
+
+  def test_missing_destination_path(self):
+    serializer = UploadFileSerializer(data={})
+
+    assert not serializer.is_valid()
+    assert "destination_path" in serializer.errors
+    assert any("required" in str(error).lower() for error in serializer.errors["destination_path"])
+
+  def test_empty_destination_path(self):
+    serializer = UploadFileSerializer(data={"destination_path": ""})
+
+    assert not serializer.is_valid()
+    assert "destination_path" in serializer.errors
+    assert any("blank" in str(error).lower() for error in serializer.errors["destination_path"])
+
+  def test_none_destination_path(self):
+    serializer = UploadFileSerializer(data={"destination_path": None})
+
+    assert not serializer.is_valid()
+    assert "destination_path" in serializer.errors
+    assert any("null" in str(error).lower() or "none" in str(error).lower() for error in serializer.errors["destination_path"])
+
+  def test_overwrite_string_values(self):
+    # Test with string 'true'
+    serializer = UploadFileSerializer(data={"destination_path": "s3a://test_bucket/test/", "overwrite": "true"})
+
+    assert serializer.is_valid(), f"Serializer validation failed: {serializer.errors}"
+    assert serializer.validated_data["overwrite"] is True
+
+    # Test with string 'false'
+    serializer = UploadFileSerializer(data={"destination_path": "s3a://test_bucket/test/", "overwrite": "false"})
+
+    assert serializer.is_valid(), f"Serializer validation failed: {serializer.errors}"
+    assert serializer.validated_data["overwrite"] is False
+
+  def test_overwrite_numeric_values(self):
+    """Test serializer handles numeric boolean values correctly."""
+    # Test with 1
+    serializer = UploadFileSerializer(data={"destination_path": "s3a://test_bucket/test/", "overwrite": 1})
+
+    assert serializer.is_valid(), f"Serializer validation failed: {serializer.errors}"
+    assert serializer.validated_data["overwrite"] is True
+
+    # Test with 0
+    serializer = UploadFileSerializer(data={"destination_path": "s3a://test_bucket/test/", "overwrite": 0})
+
+    assert serializer.is_valid(), f"Serializer validation failed: {serializer.errors}"
+    assert serializer.validated_data["overwrite"] is False
+
+  def test_invalid_overwrite_value(self):
+    serializer = UploadFileSerializer(data={"destination_path": "s3a://test_bucket/test/", "overwrite": "invalid"})
+
+    assert not serializer.is_valid()
+    assert "overwrite" in serializer.errors
+
+  def test_extra_fields_ignored(self):
+    serializer = UploadFileSerializer(
+      data={
+        "destination_path": "s3a://test_bucket/test/uploads/",
+        "overwrite": False,
+        "extra_field": "should be ignored",
+        "another_field": 123,
+      }
+    )
+
+    assert serializer.is_valid(), f"Serializer validation failed: {serializer.errors}"
+    assert "extra_field" not in serializer.validated_data
+    assert "another_field" not in serializer.validated_data
+    assert len(serializer.validated_data) == 2  # Only destination_path and overwrite
+
+  def test_various_path_formats(self):
+    valid_paths = [
+      "/user/test/",
+      "/tmp/uploads/",
+      "/home/user/documents/",
+      "hdfs:///user/test/",
+      "s3a://bucket/path/",
+      "s3a://test_bucket/test folder/uploads/",  # Path with spaces
+      "abfs://container@account.dfs.core.windows.net/path/",
+      "./relative/path/",
+      "../parent/path/",
+    ]
+
+    for path in valid_paths:
+      serializer = UploadFileSerializer(data={"destination_path": path})
+      assert serializer.is_valid(), f"Path '{path}' should be valid. Errors: {serializer.errors}"
+      assert serializer.validated_data["destination_path"] == path

+ 56 - 0
apps/filebrowser/src/filebrowser/utils.py

@@ -22,8 +22,11 @@ from urllib.parse import urlparse
 import redis
 
 from desktop.conf import TASK_SERVER_V2
+from desktop.lib import fsmanager
 from desktop.lib.django_util import JsonResponse
+from desktop.lib.fs.proxyfs import ProxyFS
 from filebrowser.conf import ALLOW_FILE_EXTENSIONS, ARCHIVE_UPLOAD_TEMPDIR, RESTRICT_FILE_EXTENSIONS
+from filebrowser.lib.rwx import filetype, rwx
 
 LOG = logging.getLogger()
 
@@ -31,6 +34,35 @@ LOG = logging.getLogger()
 DEFAULT_WRITE_SIZE = 1024 * 1024 * 128
 
 
+def get_user_fs(username: str) -> ProxyFS:
+  """Get a filesystem proxy for the given user.
+
+  This function returns a ProxyFS instance, which is a filesystem-like object
+  that routes operations to the appropriate underlying filesystem based on the
+  path's URI scheme (e.g., 'abfs://', 's3a://').
+
+  If a path has no scheme, it defaults to the first available filesystem
+  configured in Hue (e.g. HDFS). All operations are performed on behalf
+  of the specified user.
+
+  Args:
+    username: The name of the user to impersonate for filesystem operations.
+
+  Returns:
+    A ProxyFS object that can be used to access any configured filesystem.
+
+  Raises:
+    ValueError: If the username is empty.
+  """
+  if not username:
+    raise ValueError("Username is required")
+
+  fs = fsmanager.get_filesystem("default")
+  fs.setuser(username)
+
+  return fs
+
+
 def calculate_total_size(uuid, totalparts):
   total = 0
   files = [os.path.join(ARCHIVE_UPLOAD_TEMPDIR.get(), f'{uuid}_{i}') for i in range(totalparts)]
@@ -168,3 +200,27 @@ def is_file_upload_allowed(file_name):
       return False, f'File type "{file_type}" is restricted. Update file extension restrictions to allow this type.'
 
   return True, None
+
+
+def massage_stats(stats):
+  """Converts a file stats object into a dictionary with extra fields.
+
+  This function takes a file stats object (typically from an underlying
+  filesystem), converts it to a JSON-compatible dictionary, and enriches it
+  with 'type' (e.g., 'file', 'dir') and 'rwx' (e.g., 'rwxr-x---') fields.
+
+  Args:
+    stats: A file stats object from a filesystem implementation.
+
+  Returns:
+    A dictionary containing the file's stats and additional metadata.
+  """
+  stats_dict = stats.to_json_dict()
+  stats_dict.update(
+    {
+      "type": filetype(stats.mode),
+      "rwx": rwx(stats.mode, stats.aclBit),
+    }
+  )
+
+  return stats_dict

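A short sketch of how `get_user_fs()` and `massage_stats()` could be combined by a caller; the username and path are placeholders:

```python
from filebrowser.utils import get_user_fs, massage_stats

def stat_as_dict(username, path):
  """Return enriched stats for a path, impersonating the given user (sketch)."""
  fs = get_user_fs(username)   # ProxyFS that routes on the path's URI scheme
  stats = fs.stats(path)       # e.g. 's3a://bucket/key' or an HDFS path
  return massage_stats(stats)  # adds 'type' and 'rwx' to the JSON dict
```
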
+ 62 - 1
apps/filebrowser/src/filebrowser/utils_test.py

@@ -15,8 +15,12 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 
+from unittest.mock import MagicMock, patch
+
+import pytest
+
 from filebrowser.conf import ALLOW_FILE_EXTENSIONS, RESTRICT_FILE_EXTENSIONS
-from filebrowser.utils import is_file_upload_allowed
+from filebrowser.utils import get_user_fs, is_file_upload_allowed
 
 
 class TestIsFileUploadAllowed:
@@ -289,3 +293,60 @@ class TestIsFileUploadAllowed:
     finally:
       reset_allow()
       reset_restrict()
+
+
+class TestGetUserFs:
+  @patch("filebrowser.utils.fsmanager.get_filesystem")
+  def test_get_user_fs_success(self, mock_get_filesystem):
+    mock_fs = MagicMock()
+    mock_get_filesystem.return_value = mock_fs
+
+    result = get_user_fs("test_user")
+
+    assert result == mock_fs
+    mock_get_filesystem.assert_called_once_with("default")
+    mock_fs.setuser.assert_called_once_with("test_user")
+
+  @patch("filebrowser.utils.fsmanager.get_filesystem")
+  def test_get_user_fs_empty_username(self, mock_get_filesystem):
+    with pytest.raises(ValueError) as exc_info:
+      get_user_fs("")
+
+    assert str(exc_info.value) == "Username is required"
+    mock_get_filesystem.assert_not_called()
+
+  @patch("filebrowser.utils.fsmanager.get_filesystem")
+  def test_get_user_fs_none_username(self, mock_get_filesystem):
+    with pytest.raises(ValueError) as exc_info:
+      get_user_fs(None)
+
+    assert str(exc_info.value) == "Username is required"
+    mock_get_filesystem.assert_not_called()
+
+  @patch("filebrowser.utils.fsmanager.get_filesystem")
+  def test_get_user_fs_various_usernames(self, mock_get_filesystem):
+    mock_fs = MagicMock()
+    mock_get_filesystem.return_value = mock_fs
+
+    test_usernames = [
+      "user1",
+      "test-user",
+      "user.name",
+      "user_name",
+      "user123",
+      "user@domain.com",
+      "user with spaces",  # Unusual but should work
+      "user_with_unicode_ñáme",
+      "用户名",
+      "very_long_username_that_is_still_valid_123456789",
+    ]
+
+    for username in test_usernames:
+      mock_get_filesystem.reset_mock()
+      mock_fs.reset_mock()
+
+      result = get_user_fs(username)
+
+      assert result == mock_fs, f"Failed for username: {username}"
+      mock_get_filesystem.assert_called_once_with("default")
+      mock_fs.setuser.assert_called_once_with(username)

+ 0 - 6
desktop/core/src/desktop/api_public.py

@@ -276,12 +276,6 @@ def storage_save_file(request):
   return filebrowser_api.save_file(django_request)
 
 
-@api_view(["POST"])
-def storage_upload_file(request):
-  django_request = get_django_request(request)
-  return filebrowser_api.upload_file(django_request)
-
-
 @api_view(["POST"])
 def storage_upload_chunks(request):
   django_request = get_django_request(request)

+ 2 - 1
desktop/core/src/desktop/api_public_urls_v1.py

@@ -21,6 +21,7 @@ from about import api as about_api
 from desktop import api_public
 from desktop.lib.botserver import api as botserver_api
 from desktop.lib.importer import api as importer_api
+from filebrowser import api as filebrowser_api
 
 # "New" query API (i.e. connector based, lean arguments).
 # e.g. https://demo.gethue.com/api/query/execute/hive
@@ -114,7 +115,7 @@ urlpatterns += [
   re_path(r'^storage/rename/?$', api_public.storage_rename, name='storage_rename'),
   re_path(r'^storage/move/?$', api_public.storage_move, name='storage_move'),
   re_path(r'^storage/copy/?$', api_public.storage_copy, name='storage_copy'),
-  re_path(r'^storage/upload/file/?$', api_public.storage_upload_file, name='storage_upload_file'),
+  re_path(r'^storage/upload/file/?$', filebrowser_api.UploadFileAPI.as_view(), name='storage_upload_file'),
   re_path(r'^storage/upload/chunks/?$', api_public.storage_upload_chunks, name='storage_upload_chunks'),
   re_path(r'^storage/upload/complete/?$', api_public.storage_upload_complete, name='storage_upload_complete'),
   re_path(r'^storage/stat/?$', api_public.storage_stat, name='storage_stat'),

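With the route now pointing at `UploadFileAPI`, a client call could look roughly like the sketch below. The host, token, and the multipart field name `file` are assumptions; the query parameter names match `UploadFileSerializer`:

```python
import requests

# Hypothetical client call against a Hue deployment
with open("report.csv", "rb") as fh:
  resp = requests.post(
    "https://hue.example.com/api/v1/storage/upload/file/",
    params={"destination_path": "s3a://bucket/uploads/", "overwrite": "false"},
    files={"file": fh},                           # field name is an assumption
    headers={"Authorization": "Bearer <token>"},  # auth scheme depends on the deployment
  )
print(resp.status_code, resp.json())
```
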
+ 7 - 17
desktop/core/src/desktop/lib/fs/gc/gs.py

@@ -14,22 +14,18 @@
 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 # See the License for the specific language governing permissions and
 # limitations under the License.
-import os
-import re
-import time
 import logging
 import posixpath
+import re
 
 from boto.exception import BotoClientError, GSResponseError
-from boto.gs.connection import Location
 from boto.gs.key import Key
 from boto.s3.prefix import Prefix
-from django.http.multipartparser import MultiPartParser
 from django.utils.translation import gettext as _
 
 from aws.s3.s3fs import S3FileSystem
-from desktop.conf import GC_ACCOUNTS, PERMISSION_ACTION_GS, is_raz_gs
-from desktop.lib.fs.gc import GS_ROOT, abspath, join as gs_join, normpath, parse_uri, translate_gs_error
+from desktop.conf import GC_ACCOUNTS, is_raz_gs, PERMISSION_ACTION_GS
+from desktop.lib.fs.gc import abspath, GS_ROOT, join as gs_join, normpath, parse_uri, translate_gs_error
 from desktop.lib.fs.gc.gsfile import open as gsfile_open
 from desktop.lib.fs.gc.gsstat import GSStat
 from filebrowser.conf import REMOTE_STORAGE_HOME
@@ -290,7 +286,7 @@ class GSFileSystem(S3FileSystem):
       GSFileSystemException: If the removal operation fails.
     """
     if not skipTrash:
-      raise NotImplementedError(_('Moving to trash is not implemented for GS'))
+      raise NotImplementedError("Moving to trash is not implemented for GS")
 
     bucket_name, key_name = parse_uri(path)[:2]
     if bucket_name and not key_name:
@@ -479,12 +475,6 @@ class GSFileSystem(S3FileSystem):
     else:
       return False
 
-  @translate_gs_error
-  @auth_error_handler
-  def upload_v1(self, META, input_data, destination, username):
-    from desktop.lib.fs.gc.upload import GSNewFileUploadHandler  # Circular dependency
-
-    gs_upload_handler = GSNewFileUploadHandler(destination, username)
-
-    parser = MultiPartParser(META, input_data, [gs_upload_handler])
-    return parser.parse()
+  def get_upload_handler(self, destination_path, overwrite):
+    from desktop.lib.fs.gc.upload import GSNewFileUploadHandler
+    return GSNewFileUploadHandler(self, destination_path, overwrite)

+ 134 - 33
desktop/core/src/desktop/lib/fs/gc/upload.py

@@ -22,15 +22,18 @@ See http://docs.djangoproject.com/en/1.9/topics/http/file-uploads/
 """
 
 import logging
+import os
 from io import BytesIO as stream_io
 
 from django.core.files.uploadedfile import SimpleUploadedFile
 from django.core.files.uploadhandler import FileUploadHandler, StopFutureHandlers, StopUpload, UploadFileException
 
+from desktop.lib.exceptions_renderable import PopupException
 from desktop.lib.fs.gc import parse_uri
 from desktop.lib.fs.gc.gs import GSFileSystemException
 from desktop.lib.fsmanager import get_client
-from filebrowser.utils import is_file_upload_allowed
+from filebrowser.conf import MAX_FILE_SIZE_UPLOAD_LIMIT
+from filebrowser.utils import is_file_upload_allowed, massage_stats
 
 LOG = logging.getLogger()
 
@@ -186,48 +189,146 @@ class GSFileUploadHandler(FileUploadHandler):
     return fp
 
 
-class GSNewFileUploadHandler(GSFileUploadHandler):
-  """This handler uploads the file to Google Storage if the destination path starts with "GS" (case insensitive).
-  Streams data chunks directly to Google Cloud Storage (GS).
+class GSNewFileUploadHandler(FileUploadHandler):
+  """
+  Handles direct file uploads to Google Cloud Storage using multipart streaming.
+
+  This handler bypasses local storage and streams file chunks directly to GCS,
+  enabling efficient handling of large files without memory constraints.
+
+  Key features:
+  - Multipart upload for reliability and resumability
+  - Streaming chunks directly to GCS (no temporary files)
+  - Comprehensive validation and security checks
+  - Automatic cleanup on failure
   """
   """
 
 
-  def __init__(self, dest_path, username):
+  def __init__(self, fs, dest_path, overwrite):
     self.chunk_size = DEFAULT_WRITE_SIZE
     self.chunk_size = DEFAULT_WRITE_SIZE
-    self.destination = dest_path
-    self.username = username
-    self.target_path = None
-    self.file = None
-    self._mp = None
-    self._part_num = 1
+    self._fs = fs
+    self.dest_path = dest_path
+    self.overwrite = overwrite
+    self.part_number = 1
+    self.multipart_upload = None
+    self.total_bytes_received = 0
 
-    # TODO: _is_gs_upload really required?
-    if self._is_gs_upload():
-      self._fs = get_client(fs='gs', user=self.username)
-      self.bucket_name, self.key_name = parse_uri(self.destination)[:2]
+    self.bucket_name, self.key_name = parse_uri(self.dest_path)[:2]
 
-      self._bucket = self._fs._get_bucket(self.bucket_name)
+    self._bucket = self._fs._get_bucket(self.bucket_name)
+
+    LOG.info(f"GSNewFileUploadHandler initialized - destination: {dest_path}, overwrite: {overwrite}")
 
   def new_file(self, field_name, file_name, *args, **kwargs):
-    """Handle the start of a new file upload.
+    super(GSNewFileUploadHandler, self).new_file(field_name, file_name, *args, **kwargs)
 
-    This method is called when a new file is encountered during the upload process.
+    LOG.info(f"Starting GCS upload for file: {file_name}")
+
+    # Validate the file upload
+    self._validate_upload_prerequisites(file_name)
+
+    self.target_key_path = self._fs.join(self.key_name, file_name)
+
+    # Create a multipart upload request
+    try:
+      LOG.debug(f"Initiating GS multipart upload to target path: {self.target_key_path}")
+      self.multipart_upload = self._bucket.initiate_multipart_upload(self.target_key_path)
+      LOG.info(f"Multipart upload initiated successfully for: {self.target_key_path}")
+    except Exception as e:
+      LOG.error(f"Failed to initiate GS multipart upload for {self.target_key_path}: {e}")
+      raise PopupException(f"Failed to initiate GS multipart upload to target path: {self.target_key_path}", error_code=500)
+
+  def _validate_upload_prerequisites(self, file_name):
+    """Validate all prerequisites before initiating file upload to GS.
+
+    Performs security and permission checks including:
+    - File extension restrictions
+    - Destination path existence and type validation
+    - Directory traversal attack prevention
+    - Write permission verification
+    - File overwrite handling based on policy
+
+    Args:
+      file_name: Name of the file to be uploaded.
+
+    Raises:
+      PopupException: With appropriate HTTP error codes:
+        - 400: Invalid file extension or filename
+        - 403: Insufficient permissions
+        - 404: Destination path not found
+        - 409: File exists and overwrite is disabled
     """
     """
-    if self._is_gs_upload():
-      super().new_file(field_name, file_name, *args, **kwargs)
+    LOG.debug(f"Validating upload prerequisites for file: {file_name}")
+
+    # Check file extension restrictions
+    is_allowed, err_message = is_file_upload_allowed(file_name)
+    if not is_allowed:
+      LOG.warning(f"File upload rejected - {err_message}")
+      raise PopupException(err_message, error_code=400)
+
+    # Check if the destination path already exists or not
+    if not self._fs.exists(self.dest_path):
+      LOG.error(f"Destination path does not exist: {self.dest_path}")
+      raise PopupException(f"The destination path {self.dest_path} does not exist.", error_code=404)
+
+    # Check if the destination path is a directory or not
+    if not self._fs.isdir(self.dest_path):
+      LOG.error(f"Destination path is not a directory: {self.dest_path}")
+      raise PopupException(f"The destination path {self.dest_path} is not a directory.", error_code=400)
+
+    # Check if the file name contains a path separator
+    # This prevents directory traversal attacks
+    if os.path.sep in file_name:
+      LOG.warning(f"Invalid filename with path separator: {file_name}")
+      raise PopupException("Invalid filename. Path separators are not allowed.", error_code=400)
+
+    # Check if the user has write access to the destination path
+    if not self._fs.check_access(self.dest_path, permission="WRITE"):
+      LOG.error(f"Insufficient permissions for destination: {self.dest_path}")
+      raise PopupException(f"Insufficient permissions to write to GS path {self.dest_path}.", error_code=403)
+
+    # Check if the file already exists at the destination path
+    target_file_path = self._fs.join(self.dest_path, file_name)
+    if self._fs.exists(target_file_path):
+      if self.overwrite:
+        LOG.info(f"Overwriting existing file: {target_file_path}")
+        self._fs.remove(target_file_path)
+      else:
+        LOG.warning(f"File already exists and overwrite is disabled: {target_file_path}")
+        raise PopupException(f"The file {file_name} already exists at the destination path.", error_code=409)
 
-      LOG.info('Using GSFileUploadHandler to handle file upload.')
-      self.target_path = self._fs.join(self.key_name, file_name)
+    LOG.debug("Upload prerequisites validation completed successfully")
 
-      try:
-        # Check access permissions before attempting upload
-        self._check_access()
+  def receive_data_chunk(self, raw_data, start):
+    self.total_bytes_received += len(raw_data)
+    max_size = MAX_FILE_SIZE_UPLOAD_LIMIT.get()
+
+    # Perform max size check on the fly
+    if max_size != -1 and max_size >= 0 and self.total_bytes_received > max_size:
+      LOG.error(f"File size exceeded limit - received: {self.total_bytes_received}, max: {max_size}")
+      raise PopupException(f"File exceeds maximum allowed size of {max_size} bytes.", error_code=413)
+
+    # Upload the chunk
+    self.upload_chunk(raw_data)
+    return None
+
+  def upload_chunk(self, raw_chunk):
+    try:
+      LOG.debug(f"Uploading part {self.part_number}, size: {len(raw_chunk)} bytes")
+      self.multipart_upload.upload_part_from_file(fp=stream_io(raw_chunk), part_num=self.part_number)
+      self.part_number += 1
+    except Exception as e:
+      LOG.error(f"Failed to upload part {self.part_number}: {e}")
+      self.multipart_upload.cancel_upload()
+      raise PopupException(f"Failed to upload part: {e}", error_code=500)
 
-        # Create a multipart upload request
-        LOG.debug("Initiating GS multipart upload to target path: %s" % self.target_path)
-        self._mp = self._bucket.initiate_multipart_upload(self.target_path)
-        self.file = SimpleUploadedFile(name=file_name, content='')
+  def file_complete(self, file_size):
+    # Finish the upload
+    LOG.info(f"Completing multipart upload - total size: {file_size} bytes, parts: {self.part_number - 1}")
+    self.multipart_upload.complete_upload()
 
-        raise StopFutureHandlers()
-      except (GSFileUploadError, GSFileSystemException) as e:
-        LOG.error("Encountered error in GSUploadHandler check_access: %s" % e)
-        raise StopUpload()
+    file_stats = self._fs.stats(f"gs://{self.bucket_name}/{self.target_key_path}")
+    file_stats = massage_stats(file_stats)
+
+    LOG.info(f"Upload completed successfully: {self.target_key_path}")
+
+    return file_stats

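The handler above plugs into Django's standard upload machinery. A rough sketch of the glue the new API view presumably provides, mirroring the removed `upload_v1()` helpers; the function and argument names here are illustrative only:

```python
from django.http.multipartparser import MultiPartParser

def stream_upload(request, fs, destination_path, overwrite=False):
  # Pick the provider-specific handler, e.g. GSNewFileUploadHandler for gs:// paths
  handler = fs.get_upload_handler(destination_path, overwrite)

  # MultiPartParser drives new_file() / receive_data_chunk() / file_complete()
  _post, files = MultiPartParser(request.META, request, [handler]).parse()
  return files  # carries the stats dict returned by file_complete()
```
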
+ 152 - 45
desktop/core/src/desktop/lib/fs/ozone/ofs.py

@@ -19,18 +19,19 @@
 Interfaces for Hadoop filesystem access via HttpFs/WebHDFS
 """
 
-import stat
 import errno
 import logging
 import posixpath
+import stat
+import time
+import uuid
 from urllib.parse import urlparse as lib_urlparse
 
-from django.http.multipartparser import MultiPartParser
 from django.utils.encoding import smart_str
 from django.utils.translation import gettext as _
 
 from desktop.conf import PERMISSION_ACTION_OFS
-from desktop.lib.fs.ozone import OFS_ROOT, _serviceid_join, is_root, join as ofs_join, normpath, parent_path
+from desktop.lib.fs.ozone import _serviceid_join, is_root, join as ofs_join, normpath, OFS_ROOT, parent_path
 from desktop.lib.fs.ozone.ofsstat import OzoneFSStat
 from hadoop.fs.exceptions import WebHdfsException
 from hadoop.fs.webhdfs import WebHdfs
@@ -82,11 +83,15 @@ class OzoneFS(WebHdfs):
       umask=get_umask_mode(),
     )
 
+  @property
+  def temp_dir(self):
+    return self._temp_dir
+
   def strip_normpath(self, path):
     if path.startswith(OFS_ROOT + self._netloc):
       path = path.split(OFS_ROOT + self._netloc)[1]
-    elif path.startswith('ofs:/' + self._netloc):
-      path = path.split('ofs:/' + self._netloc)[1]
+    elif path.startswith("ofs:/" + self._netloc):
+      path = path.split("ofs:/" + self._netloc)[1]
 
     return path
 
@@ -115,13 +120,13 @@ class OzoneFS(WebHdfs):
       params = self._getparams()
 
       if glob is not None:
-        params['filter'] = glob
-      params['op'] = 'LISTSTATUS'
+        params["filter"] = glob
+      params["op"] = "LISTSTATUS"
       headers = self._getheaders()
 
       json = self._root.get(path, params, headers)
 
-    filestatus_list = json['FileStatuses']['FileStatus']
+    filestatus_list = json["FileStatuses"]["FileStatus"]
     return [OzoneFSStat(st, path, self._netloc) for st in filestatus_list]
 
   def _stats(self, path):
@@ -129,38 +134,38 @@ class OzoneFS(WebHdfs):
     This stats method returns None if the entry is not found.
     """
     if path == OFS_ROOT:
-      serviceid_path_status = self._handle_serviceid_path_status()['FileStatuses']['FileStatus'][0]
-      json = {'FileStatus': serviceid_path_status}
+      serviceid_path_status = self._handle_serviceid_path_status()["FileStatuses"]["FileStatus"][0]
+      json = {"FileStatus": serviceid_path_status}
     else:
       path = self.strip_normpath(path)
       params = self._getparams()
-      params['op'] = 'GETFILESTATUS'
+      params["op"] = "GETFILESTATUS"
       headers = self._getheaders()
 
       try:
         json = self._root.get(path, params, headers)
       except WebHdfsException as ex:
-        if ex.server_exc == 'FileNotFoundException' or ex.code == 404:
+        if ex.server_exc == "FileNotFoundException" or ex.code == 404:
           return None
         raise ex
 
-    return OzoneFSStat(json['FileStatus'], path, self._netloc)
+    return OzoneFSStat(json["FileStatus"], path, self._netloc)
 
   def _handle_serviceid_path_status(self):
     json = {
-      'FileStatuses': {
-        'FileStatus': [
+      "FileStatuses": {
+        "FileStatus": [
           {
-            'pathSuffix': self._netloc,
-            'type': 'DIRECTORY',
-            'length': 0,
-            'owner': '',
-            'group': '',
-            'permission': '777',
-            'accessTime': 0,
-            'modificationTime': 0,
-            'blockSize': 0,
-            'replication': 0,
+            "pathSuffix": self._netloc,
+            "type": "DIRECTORY",
+            "length": 0,
+            "owner": "",
+            "group": "",
+            "permission": "777",
+            "accessTime": 0,
+            "modificationTime": 0,
+            "blockSize": 0,
+            "replication": 0,
           }
         ]
       }
@@ -176,6 +181,111 @@ class OzoneFS(WebHdfs):
       return res
     raise IOError(errno.ENOENT, _("File %s not found") % path)
 
+  def check_access(self, path, permission="READ"):
+    """
+    Check if the user has the requested permission for a given path.
+
+    Since Ozone doesn't have a native check access API, this method verifies access
+    by attempting operations that would require the specified permission level.
+
+    Args:
+      path (str): The OFS path to check access for
+      permission (str): Permission type to check - 'READ' or 'WRITE' (case-insensitive)
+
+    Returns:
+      bool: True if user has the requested permission, False otherwise
+
+    Note:
+      - For READ permission: Checks if path exists and tries to access its metadata
+      - For WRITE permission: For directories, attempts to create a temporary file;
+        for files or non-existent paths, checks parent directory write access
+    """
+    permission = permission.upper()
+
+    if permission not in ("READ", "WRITE"):
+      LOG.warning(f'Invalid permission type "{permission}". Must be READ or WRITE.')
+      return False
+
+    try:
+      if permission == "READ":
+        # For read access, we need to verify the path exists and is accessible
+        if not self.exists(path):
+          LOG.debug(f'Path "{path}" does not exist, cannot read.')
+          return False
+
+        try:
+          if self.isdir(path):
+            # For directories, attempt to list contents
+            # Use a small limit for efficiency
+            self.listdir_stats(path)[:1]
+          else:
+            # For files, get file stats
+            self.stats(path)
+          return True
+        except WebHdfsException as e:
+          if e.code in (401, 403):  # Unauthorized or Forbidden
+            LOG.debug(f'No read permission for path "{path}": {str(e)}')
+            return False
+          # Re-raise unexpected errors
+          raise
+
+      # Check WRITE permission
+      else:
+        # For non-existent paths, check parent directory
+        if not self.exists(path):
+          parent = self.parent_path(path)
+
+          # If we can't determine parent or we're at root, deny access
+          if not parent or parent == path:
+            LOG.debug(f'Cannot determine parent for non-existent path "{path}"')
+            return False
+
+          # Recursively check parent write access
+          return self.check_access(parent, permission="WRITE")
+
+        # For existing paths
+        if self.isdir(path):
+          # For directories, try creating a temporary marker file
+          temp_file = None
+          try:
+            # Generate unique temporary filename with timestamp
+            temp_file = self.join(path, f".hue_access_check_{str(int(time.time() * 1000))}_{str(uuid.uuid4())[:8]}")
+
+            # Attempt to create the temporary file
+            self.create(temp_file, overwrite=True, data="")
+
+            # Clean up the temporary file if creation succeeded
+            try:
+              self.remove(temp_file)
+            except Exception as cleanup_error:
+              LOG.warning(f'Failed to clean up temporary file "{temp_file}": {cleanup_error}')
+
+            return True
+
+          except WebHdfsException as e:
+            if e.code in (401, 403):  # Unauthorized or Forbidden
+              LOG.debug(f'No write permission for directory "{path}": {str(e)}')
+              return False
+            # Re-raise unexpected errors
+            raise
+
+        else:
+          # For files, check write permission on parent directory
+          parent = self.parent_path(path)
+          if parent and parent != path:
+            return self.check_access(parent, permission="WRITE")
+          else:
+            LOG.debug(f'Cannot check write access for file "{path}", no valid parent found')
+            return False
+
+    except WebHdfsException as e:
+      LOG.debug(f'Ozone filesystem error checking {permission} permission at path "{path}": {str(e)}')
+      return False
+    except Exception as e:
+      # Log unexpected errors but don't crash
+      LOG.warning(f'Unexpected error checking {permission} permission at path "{path}": {str(e)}')
+      return False
+
   def filebrowser_action(self):
     return self._filebrowser_action
 
@@ -186,14 +296,6 @@ class OzoneFS(WebHdfs):
     """
     pass
 
-  def upload_v1(self, META, input_data, destination, username):
-    from desktop.lib.fs.ozone.upload import OFSNewFileUploadHandler  # Circular dependency
-
-    ofs_upload_handler = OFSNewFileUploadHandler(destination, username)
-
-    parser = MultiPartParser(META, input_data, [ofs_upload_handler])
-    return parser.parse()
-
   def rename(self, old, new):
     """rename(old, new)"""
     old = self.strip_normpath(old)
@@ -202,15 +304,15 @@ class OzoneFS(WebHdfs):
     new = self.strip_normpath(new)
 
     params = self._getparams()
-    params['op'] = 'RENAME'
+    params["op"] = "RENAME"
     # Encode `new' because it's in the params
-    params['destination'] = smart_str(new)
+    params["destination"] = smart_str(new)
     headers = self._getheaders()
 
     result = self._root.put(old, params, headers=headers)
 
-    if not result['boolean']:
-      raise IOError(_("Rename failed: %s -> %s") % (smart_str(old, errors='replace'), smart_str(new, errors='replace')))
+    if not result["boolean"]:
+      raise IOError(_("Rename failed: %s -> %s") % (smart_str(old, errors="replace"), smart_str(new, errors="replace")))
 
   def rename_star(self, old_dir, new_dir):
     """Equivalent to `mv old_dir/* new"""
@@ -236,15 +338,15 @@ class OzoneFS(WebHdfs):
     if not self.exists(destination):
       self.do_as_user(owner, self.mkdir, destination, mode=dir_mode)
 
-    for stat in self.listdir_stats(source):
-      source_file = stat.path
-      destination_file = posixpath.join(destination, stat.name)
-      if stat.isDir:
+    for s in self.listdir_stats(source):
+      source_file = s.path
+      destination_file = posixpath.join(destination, s.name)
+      if s.isDir:
         self.copy_remote_dir(source_file, destination_file, dir_mode, owner, skip_file_list)
       else:
-        if stat.size > self.get_upload_chuck_size():
+        if s.size > self.get_upload_chuck_size():
           if skip_file_list is not None:
-            skip_file_list += ' \n- ' + source_file
+            skip_file_list += " \n- " + source_file
         else:
           self.do_as_user(owner, self.copyfile, source_file, destination_file)
     return skip_file_list
@@ -282,7 +384,7 @@ class OzoneFS(WebHdfs):
     if not self.exists(src):
       raise IOError(errno.ENOENT, _("File not found: %s") % src)
 
-    skip_file_list = ''  # Store the files to skip copying which are greater than the upload_chunck_size()
+    skip_file_list = ""  # Store the files to skip copying which are greater than the upload_chunck_size()
 
     if self.isdir(src):
       # 'src' is directory.
@@ -316,6 +418,11 @@ class OzoneFS(WebHdfs):
         else:
           self.copyfile(src, dest)
       else:
-        skip_file_list += ' \n- ' + src
+        skip_file_list += " \n- " + src
 
     return skip_file_list
+
+  def get_upload_handler(self, destination_path, overwrite):
+    from desktop.lib.fs.ozone.upload import OFSNewFileUploadHandler
+
+    return OFSNewFileUploadHandler(self, destination_path, overwrite)

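Since Ozone has no native access-check API, callers probe it through the method above; a small hedged example of how an upload path might use it (`ofs` and the path are placeholders):

```python
def ensure_writable(ofs, dest_path):
  """Raise if the impersonated user cannot write to dest_path on Ozone (sketch)."""
  if not ofs.check_access(dest_path, permission="WRITE"):
    raise PermissionError(f"No WRITE access to {dest_path}")

# ensure_writable(ofs, "ofs://ozone1/vol1/bucket1/uploads/")
```
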
+ 225 - 32
desktop/core/src/desktop/lib/fs/ozone/upload.py

@@ -16,6 +16,8 @@
 
 import io
 import logging
+import os
+import tempfile
 import unicodedata
 
 from django.core.files.uploadedfile import SimpleUploadedFile
@@ -25,7 +27,8 @@ from django.utils.translation import gettext as _
 from desktop.conf import TASK_SERVER_V2
 from desktop.lib.exceptions_renderable import PopupException
 from desktop.lib.fsmanager import get_client
-from filebrowser.utils import calculate_total_size, generate_chunks, is_file_upload_allowed
+from filebrowser.conf import MAX_FILE_SIZE_UPLOAD_LIMIT
+from filebrowser.utils import calculate_total_size, generate_chunks, is_file_upload_allowed, massage_stats
 from hadoop.conf import UPLOAD_CHUNK_SIZE
 from hadoop.fs.exceptions import WebHdfsException
 
@@ -281,46 +284,236 @@ class OFSFileUploadHandler(FileUploadHandler):
       return None
 
 
-class OFSNewFileUploadHandler(OFSFileUploadHandler):
+class OFSNewFileUploadHandler(FileUploadHandler):
   """
-  This handler uploads the file to Apache Ozone if the destination path starts with "OFS" (case insensitive).
-  Streams data chunks directly to OFS.
+  Handles file uploads to Ozone File System using temporary file buffering.
+
+  Unlike direct streaming approaches, this handler uses a temporary file to buffer
+  the entire upload before transferring to Ozone, as Ozone lacks native append/concat APIs.
+
+  Key features:
+  - Temporary file buffering for reliable uploads
+  - Automatic cleanup of temp files on success/failure
+  - Comprehensive validation and security checks
+  - Memory-efficient handling of large files
   """
   """
 
 
-  def __init__(self, dest_path, username):
+  def __init__(self, fs, dest_path, overwrite):
     self.chunk_size = UPLOAD_CHUNK_SIZE.get()
-    self.destination = dest_path
-    self.username = username
-    self.target_path = None
-    self.file = None
-    self._part_size = UPLOAD_CHUNK_SIZE.get()
+    self._fs = fs
+    self.dest_path = dest_path
+    self.overwrite = overwrite
+    self.total_bytes_received = 0
+    self.target_file_path = None
+    self._temp_file = None
+    self._temp_file_path = None
 
-    # TODO: _is_ofs_upload really required?
-    if self._is_ofs_upload():
-      self._fs = self._get_ofs(self.username)
-
-    LOG.debug("Chunk size = %d" % UPLOAD_CHUNK_SIZE.get())
+    LOG.info(f"OFSNewFileUploadHandler initialized - destination: {dest_path}, overwrite: {overwrite}")
 
   def new_file(self, field_name, file_name, *args, **kwargs):
-    if self._is_ofs_upload():
-      super(OFSFileUploadHandler, self).new_file(field_name, file_name, *args, **kwargs)
+    super(OFSNewFileUploadHandler, self).new_file(field_name, file_name, *args, **kwargs)
 
-      LOG.info('Using OFSFileUploadHandler to handle file upload.')
-      self.target_path = self._fs.join(self.destination, file_name)
+    LOG.info(f"Starting OFS upload for file: {file_name}")
+
+    # Validate upload prerequisites
+    self._validate_upload_prerequisites(file_name)
+
+    # Build the target path
+    self.target_file_path = self._fs.join(self.dest_path, file_name)
+    LOG.info(f"OFS upload target path: {self.target_file_path}")
 
+    # Create a temporary file in the configured temp directory to buffer the upload data
+    try:
+      self._temp_file = tempfile.NamedTemporaryFile(
+        mode="wb",
+        dir=self._fs.temp_dir,
+        prefix="ofs_upload_",
+        suffix=".tmp",
+        delete=False,  # We'll handle deletion manually for better error handling
+      )
+      self._temp_file_path = self._temp_file.name
+      LOG.info(f"Created temporary file for OFS upload: {self._temp_file_path}")
+    except Exception as ex:
+      LOG.error(f"Failed to create temporary file for upload: {ex}")
+      raise PopupException("Failed to create temporary upload file: %s" % ex, error_code=500)
+
+    LOG.debug("OFS upload initialization completed successfully")
+
+  def _validate_upload_prerequisites(self, file_name):
+    """Validate all prerequisites before initiating file upload to Ozone.
+
+    Performs security and permission checks including:
+    - File extension restrictions
+    - Destination path existence and type validation
+    - Directory traversal attack prevention
+    - Write permission verification
+    - File overwrite handling based on policy
+
+    Args:
+      file_name: Name of the file to be uploaded.
+
+    Raises:
+      PopupException: With appropriate HTTP error codes:
+        - 400: Invalid file extension or filename
+        - 403: Insufficient permissions
+        - 404: Destination path not found
+        - 409: File exists and overwrite is disabled
+    """
+    LOG.debug(f"Validating upload prerequisites for file: {file_name}")
+
+    # Check file extension restrictions
+    is_allowed, err_message = is_file_upload_allowed(file_name)
+    if not is_allowed:
+      LOG.warning(f"File upload rejected - {err_message}")
+      raise PopupException(err_message, error_code=400)
+
+    # Check if the destination path already exists or not
+    if not self._fs.exists(self.dest_path):
+      LOG.error(f"Destination path does not exist: {self.dest_path}")
+      raise PopupException("The destination path %s does not exist." % self.dest_path, error_code=404)
+
+    # Check if the destination path is a directory or not
+    if not self._fs.isdir(self.dest_path):
+      LOG.error(f"Destination path is not a directory: {self.dest_path}")
+      raise PopupException("The destination path %s is not a directory." % self.dest_path, error_code=400)
+
+    # Check if the file name contains a path separator
+    # This prevents directory traversal attacks
+    if os.path.sep in file_name:
+      LOG.warning(f"Invalid filename with path separator: {file_name}")
+      raise PopupException("Invalid filename. Path separators are not allowed.", error_code=400)
+
+    # Check if the user has write access to the destination path
+    if not self._fs.check_access(self.dest_path, "WRITE"):
+      LOG.error(f"Insufficient permissions for destination: {self.dest_path}")
+      raise PopupException("Insufficient permissions to write to OFS path %s." % self.dest_path, error_code=403)
+
+    # Build the target path for file existence check
+    target_file_path = self._fs.join(self.dest_path, file_name)
+
+    # Check if file exists and handle overwrite
+    if self._fs.exists(target_file_path):
+      if self.overwrite:
+        LOG.info(f"Overwriting existing file: {target_file_path}")
+        self._fs.remove(target_file_path, skip_trash=True)
+      else:
+        LOG.warning(f"File already exists and overwrite is disabled: {target_file_path}")
+        raise PopupException("File already exists: %s" % target_file_path, error_code=409)
+
+    LOG.debug("Upload prerequisites validation completed successfully")
+
+  def receive_data_chunk(self, raw_data, start):
+    if not self._temp_file:
+      LOG.error("Upload handler not properly initialized - temp file is None")
+      raise PopupException("Upload handler not properly initialized", error_code=500)
+
+    self.total_bytes_received += len(raw_data)
+    max_size = MAX_FILE_SIZE_UPLOAD_LIMIT.get()
+
+    # Perform max size check on the fly
+    if max_size != -1 and max_size >= 0 and self.total_bytes_received > max_size:
+      LOG.error(f"File size exceeded limit - received: {self.total_bytes_received}, max: {max_size}")
+      self._cleanup_temp_file()
+      raise PopupException("File exceeds maximum allowed size of %d bytes." % max_size, error_code=413)
+
+    # Write the data chunk to the temporary file
+    try:
+      self._temp_file.write(raw_data)
+      self._temp_file.flush()  # Ensure data is written to disk
+      LOG.debug(f"Written chunk to temp file - size: {len(raw_data)} bytes, total: {self.total_bytes_received} bytes")
+    except Exception as e:
+      LOG.exception(f"Error writing to temporary file {self._temp_file_path}")
+      self._cleanup_temp_file()
+      raise PopupException("Failed to buffer upload data: %s" % e, error_code=500)
+
+    return None
+
+  def file_complete(self, file_size):
+    # Close the temp file for writing
+    if self._temp_file and not self._temp_file.closed:
+      self._temp_file.close()
+
+    # Verify we received all data
+    if self.total_bytes_received != file_size:
+      LOG.error(f"OFS upload size mismatch - expected: {file_size} bytes, received: {self.total_bytes_received} bytes")
+      self._cleanup_temp_file()
+      raise PopupException(
+        "Upload data size mismatch: expected %d bytes, received %d bytes." % (file_size, self.total_bytes_received), error_code=422
+      )
+
+    try:
+      # Stream from temp file directly to Ozone
+      LOG.info("Creating file %s with %d bytes from temporary file" % (self.target_file_path, file_size))
+
+      # Open temp file for reading and pass the file handle
+      # The requests library will stream from the file handle automatically
+      with open(self._temp_file_path, "rb") as temp_file_handle:
+        self._fs.create(
+          self.target_file_path,
+          overwrite=False,  # We already handled overwrite above
+          permission=self._fs.getDefaultFilePerms(),  # Default file permissions
+          data=temp_file_handle,
+        )
+
+      # Verify the upload succeeded by getting the file stats
+      file_stats = self._fs.stats(self.target_file_path)
+
+      # Perform size verification explicitly
+      actual_size = file_stats.size
+
+      if actual_size != file_size:
+        LOG.error(
+          "OFS upload size mismatch after write for %s: expected %d bytes, got %d bytes" % (self.target_file_path, file_size, actual_size)
+        )
+
+        # Clean up the corrupted file
+        try:
+          self._fs.remove(self.target_file_path, skip_trash=True)
+        except Exception as cleanup_error:
+          LOG.warning("Failed to clean up corrupted file %s: %s" % (self.target_file_path, cleanup_error))
+
+        # Raise exception to fail the upload
+        raise PopupException(
+          "Upload verification failed: expected %d bytes, but only %d bytes were written. "
+          "The incomplete file has been removed." % (file_size, actual_size),
+          error_code=422,
+        )
+
+      LOG.info("OFS upload completed successfully: %d bytes written to %s" % (file_size, self.target_file_path))
+
+    except Exception as e:
+      LOG.exception('Error creating file "%s" in OFS' % self.target_file_path)
+
+      # Try to clean up if file was partially created
       try:
-        # Check access permissions before attempting upload
-        # self._check_access() # Not implemented
-        LOG.debug("Initiating OFS upload to target path: %s" % self.target_path)
-        self.file = SimpleUploadedFile(name=file_name, content='')
-        raise StopFutureHandlers()
-      except (OFSFileUploadError, WebHdfsException) as e:
-        LOG.error("Encountered error in OFSUploadHandler check_access: %s" % e)
-        raise StopUpload()
+        if self._fs.exists(self.target_file_path):
+          self._fs.remove(self.target_file_path, skip_trash=True)
+      except Exception:
+        pass
 
-  def _get_ofs(self, username):
-    fs = get_client(fs='ofs', user=username)
-    if not fs:
-      raise OFSFileUploadError(_("No OFS filesystem found."))
+      if isinstance(e, PopupException):
+        raise
+      else:
+        raise PopupException("Failed to upload file in OFS: %s" % str(e), error_code=500)
+    finally:
+      # Always clean up the temporary file
+      self._cleanup_temp_file()
 
-    return fs
+    file_stats = massage_stats(file_stats)
+    return file_stats
+
+  def _cleanup_temp_file(self):
+    """Clean up the temporary file if it exists."""
+    if self._temp_file and not self._temp_file.closed:
+      try:
+        self._temp_file.close()
+      except Exception:
+        pass
+
+    if self._temp_file_path:
+      try:
+        if os.path.exists(self._temp_file_path):
+          os.unlink(self._temp_file_path)
+          LOG.debug("Cleaned up temporary file: %s" % self._temp_file_path)
+      except Exception as e:
+        LOG.exception("Failed to clean up temporary file %s: %s" % (self._temp_file_path, e))

+ 4 - 5
desktop/core/src/desktop/lib/fs/proxyfs.py

@@ -24,9 +24,8 @@ from aws.s3.s3fs import get_s3_home_directory
 from azure.abfs.__init__ import get_abfs_home_directory
 from azure.conf import is_raz_abfs
 from desktop.auth.backend import is_admin
-from desktop.conf import DEFAULT_USER, ENABLE_ORGANIZATIONS, is_ofs_enabled, is_raz_gs
+from desktop.conf import DEFAULT_USER, is_ofs_enabled, is_raz_gs
 from desktop.lib.fs.gc.gs import get_gs_home_directory
-from desktop.lib.fs.ozone import OFS_ROOT
 from useradmin.models import User
 
 LOG = logging.getLogger()
@@ -301,9 +300,6 @@ class ProxyFS(object):
   def upload(self, file, path, *args, **kwargs):
     self._get_fs(path).upload(file, path, *args, **kwargs)
 
-  def upload_v1(self, META, input_data, destination, username):
-    self._get_fs(destination).upload_v1(META, input_data, destination, username)
-
   def check_access(self, path, *args, **kwargs):
     self._get_fs(path).check_access(path, *args, **kwargs)
 
@@ -312,3 +308,6 @@ class ProxyFS(object):
 
   def get_upload_chuck_size(self, path):
     return self._get_fs(path).get_upload_chuck_size()
+
+  def get_upload_handler(self, destination_path, overwrite):
+    return self._get_fs(destination_path).get_upload_handler(destination_path, overwrite)

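With this delegation in place, handler selection reduces to a single call on the proxy filesystem; a sketch where the scheme-to-handler mapping in the comments is the expected outcome based on the per-provider diffs above:

```python
# fs is a ProxyFS, e.g. obtained via filebrowser.utils.get_user_fs(username)
s3_handler = fs.get_upload_handler("s3a://bucket/uploads/", overwrite=False)            # expected: S3NewFileUploadHandler
gs_handler = fs.get_upload_handler("gs://bucket/uploads/", overwrite=False)             # expected: GSNewFileUploadHandler
ofs_handler = fs.get_upload_handler("ofs://ozone1/vol1/bucket1/dir/", overwrite=True)   # expected: OFSNewFileUploadHandler
```
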
+ 10 - 23
desktop/core/src/desktop/middleware.py

@@ -17,31 +17,27 @@
 
 from __future__ import absolute_import
 
-import os
-import re
-import json
-import time
-import socket
 import inspect
+import json
 import logging
-import os.path
-import secrets
-import tempfile
 import mimetypes
+import secrets
+import socket
+import time
 import traceback
 from builtins import object
 from urllib.parse import quote, urlparse
 
-import kerberos
 import django.db
-import django_prometheus
 import django.views.static
+import django_prometheus
+import kerberos
 from django.conf import settings
 from django.contrib import messages
-from django.contrib.auth import BACKEND_SESSION_KEY, REDIRECT_FIELD_NAME, authenticate, load_backend, login
+from django.contrib.auth import authenticate, BACKEND_SESSION_KEY, load_backend, login, REDIRECT_FIELD_NAME
 from django.contrib.auth.middleware import RemoteUserMiddleware
 from django.core import exceptions
-from django.http import HttpResponse, HttpResponseForbidden, HttpResponseNotAllowed, HttpResponseRedirect
+from django.http import HttpResponse, HttpResponseNotAllowed, HttpResponseRedirect
 from django.urls import resolve
 from django.utils.deprecation import MiddlewareMixin
 from django.utils.http import url_has_allowed_host_and_scheme
@@ -56,17 +52,15 @@ from desktop.conf import (
   AUTH,
   CSP_NONCE,
   CUSTOM_CACHE_CONTROL,
-  DJANGO_DEBUG_MODE,
   ENABLE_PROMETHEUS,
+  has_connectors,
   HTTP_ALLOWED_METHODS,
   HUE_LOAD_BALANCER,
+  is_gunicorn_report_enabled,
   KNOX,
-  METRICS,
   REDIRECT_WHITELIST,
   SECURE_CONTENT_SECURITY_POLICY,
   SERVER_USER,
-  has_connectors,
-  is_gunicorn_report_enabled,
 )
 from desktop.context_processors import get_app_name
 from desktop.lib import apputil, fsmanager, i18n
@@ -77,9 +71,7 @@ from desktop.lib.metrics import global_registry
 from desktop.lib.view_util import is_ajax
 from desktop.log import get_audit_logger
 from desktop.log.access import access_log, access_warn, log_page_hit
-from hadoop import cluster
 from libsaml.conf import CDP_LOGOUT_URL
-from useradmin.models import User


 def nonce_exists(response):
@@ -231,11 +223,6 @@ class ClusterMiddleware(Django4MiddlewareAdapterMixin):
   """
   Manages setting request.fs and request.jt
   """
-  def process_request(self, request):
-    # Workaround to prevent RawPostDataException: Store the request body for later access
-    # This is necessary because certain API calls (like file uploads) require the raw request body
-    # to be available. Without this, subsequent accesses to request.body might raise exceptions.
-    request._body = request.body

   def process_view(self, request, view_func, view_args, view_kwargs):
     """

+ 4 - 11
desktop/libs/aws/src/aws/s3/s3fs.py

@@ -26,7 +26,6 @@ from boto.exception import BotoClientError, S3ResponseError
 from boto.s3.connection import Location
 from boto.s3.key import Key
 from boto.s3.prefix import Prefix
-from django.http.multipartparser import MultiPartParser
 from django.utils.translation import gettext as _

 from aws import s3
@@ -631,16 +630,6 @@ class S3FileSystem(object):
   def upload(self, file, path, *args, **kwargs):
     pass  # upload is handled by S3FileUploadHandler

-  @translate_s3_error
-  @auth_error_handler
-  def upload_v1(self, META, input_data, destination, username):
-    from aws.s3.upload import S3NewFileUploadHandler  # Circular dependency
-
-    s3_upload_handler = S3NewFileUploadHandler(destination, username)
-
-    parser = MultiPartParser(META, input_data, [s3_upload_handler])
-    return parser.parse()
-
   @translate_s3_error
   @auth_error_handler
   def append(self, path, data):
@@ -671,3 +660,7 @@ class S3FileSystem(object):
   def get_upload_chuck_size(self):
     from hadoop.conf import UPLOAD_CHUNK_SIZE  # circular dependency
     return UPLOAD_CHUNK_SIZE.get()
+
+  def get_upload_handler(self, destination_path, overwrite):
+    from aws.s3.upload import S3NewFileUploadHandler
+    return S3NewFileUploadHandler(self, destination_path, overwrite)
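A hedged usage sketch of the new factory method on `S3FileSystem` — the bucket path and username below are made up, and `get_client` is the same fsmanager factory the removed handler called internally:

```python
from desktop.lib.fsmanager import get_client

fs = get_client(fs='s3a', user='hue')  # assumed user; same factory the old handler used internally
handler = fs.get_upload_handler('s3a://demo-bucket/incoming/', overwrite=True)

# The handler is bound to the filesystem client that created it, so it reuses the
# caller's existing boto connection instead of building a new one per upload.
assert handler._fs is fs
```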

+ 142 - 31
desktop/libs/aws/src/aws/s3/upload.py

@@ -22,6 +22,7 @@ See http://docs.djangoproject.com/en/1.9/topics/http/file-uploads/
 """
 """
 
 
 import logging
 import logging
+import os
 import unicodedata
 import unicodedata
 from io import BytesIO as stream_io
 from io import BytesIO as stream_io
 
 
@@ -34,7 +35,8 @@ from aws.s3.s3fs import S3FileSystemException
 from desktop.conf import TASK_SERVER_V2
 from desktop.lib.exceptions_renderable import PopupException
 from desktop.lib.fsmanager import get_client
-from filebrowser.utils import calculate_total_size, generate_chunks, is_file_upload_allowed
+from filebrowser.conf import MAX_FILE_SIZE_UPLOAD_LIMIT
+from filebrowser.utils import calculate_total_size, generate_chunks, is_file_upload_allowed, massage_stats

 DEFAULT_WRITE_SIZE = 1024 * 1024 * 128  # TODO: set in configuration (currently 128 MiB)

@@ -244,44 +246,153 @@ class S3FileUploadHandler(FileUploadHandler):
     return fp


-class S3NewFileUploadHandler(S3FileUploadHandler):
+class S3NewFileUploadHandler(FileUploadHandler):
   """
   """
-  This handler uploads the file to AWS S3 if the destination path starts with "S3" (case insensitive).
-  Streams data chunks directly to S3.
+  Handles direct file uploads to Amazon S3 using multipart streaming.
+
+  This handler bypasses local storage and streams file chunks directly to S3,
+  enabling efficient handling of large files without memory constraints.
+
+  Key features:
+  - Multipart upload for reliability and parallel processing
+  - Streaming chunks directly to S3 (no temporary files)
+  - Comprehensive validation and security checks
+  - Automatic cleanup on failure
   """
   """
-  def __init__(self, dest_path, username):
+
+  def __init__(self, fs, dest_path, overwrite):
     self.chunk_size = DEFAULT_WRITE_SIZE
-    self.destination = dest_path
-    self.username = username
-    self.target_path = None
-    self.file = None
-    self._mp = None
-    self._part_num = 1
+    self._fs = fs
+    self.dest_path = dest_path
+    self.overwrite = overwrite
+    self.part_number = 1
+    self.multipart_upload = None
+    self.total_bytes_received = 0
 
 
-    # TODO: _is_s3_upload really required?
-    if self._is_s3_upload():
-      self._fs = get_client(fs='s3a', user=self.username)
-      self.bucket_name, self.key_name = parse_uri(self.destination)[:2]
+    self.bucket_name, self.key_name = parse_uri(self.dest_path)[:2]
 
 
-      self._bucket = self._fs._get_bucket(self.bucket_name)
+    self._bucket = self._fs._get_bucket(self.bucket_name)
+
+    LOG.info(f"S3NewFileUploadHandler initialized - destination: {dest_path}, overwrite: {overwrite}")
 
 
   def new_file(self, field_name, file_name, *args, **kwargs):
-    if self._is_s3_upload():
-      super(S3FileUploadHandler, self).new_file(field_name, file_name, *args, **kwargs)
+    super(S3NewFileUploadHandler, self).new_file(field_name, file_name, *args, **kwargs)
 
 
-      LOG.info('Using S3FileUploadHandler to handle file upload.')
-      self.target_path = self._fs.join(self.key_name, file_name)
+    LOG.info(f"Starting S3 upload for file: {file_name}")
 
 
-      try:
-        # Check access permissions before attempting upload
-        self._check_access()
+    # Validate upload prerequisites
+    self._validate_upload_prerequisites(file_name)
 
 
-        # Create a multipart upload request
-        LOG.debug("Initiating S3 multipart upload to target path: %s" % self.target_path)
-        self._mp = self._bucket.initiate_multipart_upload(self.target_path)
-        self.file = SimpleUploadedFile(name=file_name, content='')
+    self.target_key_path = self._fs.join(self.key_name, file_name)
 
 
-        raise StopFutureHandlers()
-      except (S3FileUploadError, S3FileSystemException) as e:
-        LOG.error("Encountered error in S3UploadHandler check_access: %s" % e)
-        raise StopUpload()
+    # Create a multipart upload request
+    try:
+      LOG.debug(f"Initiating S3 multipart upload to target path: {self.target_key_path}")
+      self.multipart_upload = self._bucket.initiate_multipart_upload(self.target_key_path)
+      LOG.info(f"Multipart upload initiated successfully for: {self.target_key_path}")
+    except Exception as e:
+      LOG.error(f"Failed to initiate S3 multipart upload for {self.target_key_path}: {e}")
+      raise PopupException(f"Failed to initiate S3 multipart upload to target path: {self.target_key_path}", error_code=500)
+
+  def _validate_upload_prerequisites(self, file_name):
+    """Validate all prerequisites before initiating file upload to S3.
+
+    Performs security and permission checks including:
+    - File extension restrictions
+    - Destination path existence and type validation
+    - Directory traversal attack prevention
+    - Write permission verification
+    - File overwrite handling based on policy
+
+    Args:
+      file_name: Name of the file to be uploaded.
+
+    Raises:
+      PopupException: With appropriate HTTP error codes:
+        - 400: Invalid file extension or filename
+        - 403: Insufficient permissions
+        - 404: Destination path not found
+        - 409: File exists and overwrite is disabled
+    """
+    LOG.debug(f"Validating upload prerequisites for file: {file_name}")
+
+    # Check file extension restrictions
+    is_allowed, err_message = is_file_upload_allowed(file_name)
+    if not is_allowed:
+      LOG.warning(f"File upload rejected - {err_message}")
+      raise PopupException(err_message, error_code=400)
+
+    # Check if the destination path already exists or not
+    if not self._fs.exists(self.dest_path):
+      LOG.error(f"Destination path does not exist: {self.dest_path}")
+      raise PopupException(f"The destination path {self.dest_path} does not exist.", error_code=404)
+
+    # Check if the destination path is a directory or not
+    if not self._fs.isdir(self.dest_path):
+      LOG.error(f"Destination path is not a directory: {self.dest_path}")
+      raise PopupException(f"The destination path {self.dest_path} is not a directory.", error_code=400)
+
+    # Check if the file name contains a path separator
+    # This prevents directory traversal attacks
+    if os.path.sep in file_name:
+      LOG.warning(f"Invalid filename with path separator: {file_name}")
+      raise PopupException("Invalid filename. Path separators are not allowed.", error_code=400)
+
+    # Check if the user has write access to the destination path
+    if not self._fs.check_access(self.dest_path, permission="WRITE"):
+      LOG.error(f"Insufficient permissions for destination: {self.dest_path}")
+      raise PopupException(f"Insufficient permissions to write to S3 path {self.dest_path}.", error_code=403)
+
+    # Check if the file already exists at the destination path
+    file_path = self._fs.join(self.dest_path, file_name)
+    if self._fs.exists(file_path):
+      if self.overwrite:
+        LOG.info(f"Overwriting existing file: {file_path}")
+        self._fs.remove(file_path)
+      else:
+        LOG.warning(f"File already exists and overwrite is disabled: {file_path}")
+        raise PopupException(f"The file {file_name} already exists at the destination path.", error_code=409)
+
+    LOG.debug("Upload prerequisites validation completed successfully")
+
+  def receive_data_chunk(self, raw_data, start):
+    self.total_bytes_received += len(raw_data)
+    max_size = MAX_FILE_SIZE_UPLOAD_LIMIT.get()
+
+    # Perform max size check on the fly
+    if max_size != -1 and max_size >= 0 and self.total_bytes_received > max_size:
+      LOG.error(f"File size exceeded limit - received: {self.total_bytes_received}, max: {max_size}")
+      raise PopupException(f"File exceeds maximum allowed size of {max_size} bytes.", error_code=413)
+
+    # This chunk must be uploaded by the child class
+    self.upload_chunk(raw_data)
+    return None  # Return None to signal you are handling the data
+
+  def upload_chunk(self, raw_chunk):
+    try:
+      LOG.debug(f"Uploading part {self.part_number}, size: {len(raw_chunk)} bytes")
+      self.multipart_upload.upload_part_from_file(fp=self._get_file_part(raw_chunk), part_num=self.part_number)
+      self.part_number += 1
+    except Exception as e:
+      LOG.error(f"Failed to upload part {self.part_number}: {e}")
+      self.multipart_upload.cancel_upload()
+      raise PopupException(f"Failed to upload part: {e}", error_code=500)
+
+  def _get_file_part(self, raw_chunk):
+    fp = stream_io()
+    fp.write(raw_chunk)
+    fp.seek(0)
+    return fp
+
+  def file_complete(self, file_size):
+    # Finish the upload
+    LOG.info(f"Completing multipart upload - total size: {file_size} bytes, parts: {self.part_number - 1}")
+    self.multipart_upload.complete_upload()
+
+    file_stats = self._fs.stats(f"s3a://{self.bucket_name}/{self.target_key_path}")
+
+    LOG.info(f"Upload completed successfully: {self.target_key_path}")
+
+    file_stats = massage_stats(file_stats)
+
+    return file_stats
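Once installed on a request, the handler above is driven entirely by Django's upload-handler hooks. A sketch under assumed wiring — the view name and destination path are illustrative, and `request.fs` is taken to be the ProxyFS that `ClusterMiddleware` attaches to each request:

```python
from django.http import JsonResponse


def upload_view(request):
  # request.fs is assumed to be the per-user ProxyFS set up by ClusterMiddleware.
  handler = request.fs.get_upload_handler('s3a://demo-bucket/incoming/', overwrite=False)
  request.upload_handlers = [handler]

  # Touching request.FILES runs Django's multipart parser, which drives the handler:
  #   new_file()           -> prerequisite checks + initiate_multipart_upload()
  #   receive_data_chunk() -> on-the-fly size check + one upload_part_from_file() per chunk
  #   file_complete()      -> complete_upload() and the massaged file stats
  return JsonResponse({'uploaded': list(request.FILES.values())}, safe=False)
```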

+ 112 - 15
desktop/libs/azure/src/azure/abfs/abfs.py

@@ -22,11 +22,11 @@ Interfaces for ABFS
 import logging
 import os
 import threading
+import time
+import uuid
 from builtins import object
 from urllib.parse import quote as urllib_quote, urlparse as lib_urlparse
 
 
-from django.http.multipartparser import MultiPartParser
-
 import azure.abfs.__init__ as Init_ABFS
 from azure.abfs.abfsfile import ABFSFile
 from azure.abfs.abfsstats import ABFSStat
@@ -441,15 +441,18 @@ class ABFS(object):
       params = {'position': int(resp['Content-Length']) + offset, 'action': 'append'}
     else:
     else:
       params['action'] = 'append'
+
     headers = {}
+    actual_data = data.getvalue() if hasattr(data, 'getvalue') else data
+
     if size == 0 or size == '0':
-      headers['Content-Length'] = str(len(data.getvalue()))
+      headers['Content-Length'] = str(len(actual_data))
       if headers['Content-Length'] == '0':
         return
         return
     else:
     else:
       headers['Content-Length'] = str(size)
 
 
-    return self._patching_sl(path, params, data, headers, **kwargs)
+    return self._patching_sl(path, params, actual_data, headers, **kwargs)
 
 
   def flush(self, path, params=None, headers=None, **kwargs):
     """
     """
@@ -611,14 +614,6 @@ class ABFS(object):
     """
     """
     pass
     pass
 
 
-  def upload_v1(self, META, input_data, destination, username):
-    from azure.abfs.upload import ABFSNewFileUploadHandler  # Circular dependency
-
-    abfs_upload_handler = ABFSNewFileUploadHandler(destination, username)
-
-    parser = MultiPartParser(META, input_data, [abfs_upload_handler])
-    return parser.parse()
-
   def copyFromLocal(self, local_src, remote_dst, *args, **kwargs):
     """
     """
     Copy a directory or file from Local (Testing)
@@ -676,11 +671,109 @@ class ABFS(object):
     else:
     else:
       LOG.info(f'Skipping {local_src} (not a file).')
 
 
-  def check_access(self, path, *args, **kwargs):
+  def check_access(self, path, permission="READ"):
     """
     """
-    Check access of a file/directory (Work in Progress/Not Ready)
+    Check if the user has the requested permission for a given path.
+
+    This method verifies access by attempting operations that would require the
+    specified permission level. It handles both files and directories gracefully.
+
+    Args:
+      path (str): The ABFS path to check access for
+      permission (str): Permission type to check - 'READ' or 'WRITE' (case-insensitive)
+
+    Returns:
+      bool: True if user has the requested permission, False otherwise
+
+    Note:
+      - For READ permission: Checks if path exists and tries to access its metadata
+      - For WRITE permission: For directories, attempts to create a temporary file;
+        for files or non-existent paths, checks parent directory write access
     """
     """
-    raise NotImplementedError("")
+    permission = permission.upper()
+
+    if permission not in ("READ", "WRITE"):
+      LOG.warning(f'Invalid permission type "{permission}". Must be READ or WRITE.')
+      return False
+
+    try:
+      if permission == "READ":
+        # For read access, we need to verify the path exists and is accessible
+        if not self.exists(path):
+          LOG.debug(f'Path "{path}" does not exist, cannot read.')
+          return False
+
+        try:
+          if self.isdir(path):
+            # For directories, attempt to list contents
+            self.listdir_stats(path, params={"maxResults": 1})  # Limit results for efficiency
+          else:
+            # For files, get file stats
+            self.stats(path)
+          return True
+        except WebHdfsException as e:
+          if e.code in (401, 403):  # Unauthorized or Forbidden
+            LOG.debug(f'No read permission for path "{path}": {str(e)}')
+            return False
+          # Re-raise unexpected errors
+          raise
+
+      # Check WRITE permission
+      else:
+        # For non-existent paths, check parent directory
+        if not self.exists(path):
+          parent = self.parent_path(path)
+
+          # If we can't determine parent or we're at root, deny access
+          if not parent or parent == path:
+            LOG.debug(f'Cannot determine parent for non-existent path "{path}"')
+            return False
+
+          # Recursively check parent write access
+          return self.check_access(parent, permission="WRITE")
+
+        # For existing paths
+        if self.isdir(path):
+          # For directories, try creating a temporary marker file
+          temp_file = None
+          try:
+            # Generate unique temporary filename with timestamp
+            temp_file = self.join(path, f".hue_access_check_{str(int(time.time() * 1000))}_{str(uuid.uuid4())[:8]}")
+
+            # Attempt to create the temporary file
+            self.create(temp_file, overwrite=True, data="")
+
+            # Clean up the temporary file if creation succeeded
+            try:
+              self.remove(temp_file)
+            except Exception as cleanup_error:
+              LOG.warning(f'Failed to clean up temporary file "{temp_file}": {cleanup_error}')
+
+            return True
+
+          except WebHdfsException as e:
+            if e.code in (401, 403):  # Unauthorized or Forbidden
+              LOG.debug(f'No write permission for directory "{path}": {str(e)}')
+              return False
+            # Re-raise unexpected errors
+            raise
+
+        else:
+          # For files, check write permission on parent directory
+          parent = self.parent_path(path)
+          if parent and parent != path:
+            return self.check_access(parent, permission="WRITE")
+          else:
+            LOG.debug(f'Cannot check write access for file "{path}", no valid parent found')
+            return False
+
+    except ABFSFileSystemException as e:
+      LOG.debug(f'ABFS filesystem error checking {permission} permission at path "{path}": {str(e)}')
+      return False
+    except Exception as e:
+      # Log unexpected errors but don't crash
+      LOG.warning(f'Unexpected error checking {permission} permission at path "{path}": {str(e)}')
+      return False
 
 
   def mkswap(self, filename, subdir='', suffix='swp', basedir=None):
     """
     """
@@ -705,6 +798,10 @@ class ABFS(object):
     """
     """
     return UPLOAD_CHUCK_SIZE
 
 
+  def get_upload_handler(self, destination_path, overwrite):
+    from azure.abfs.upload import ABFSNewFileUploadHandler
+    return ABFSNewFileUploadHandler(self, destination_path, overwrite)
+
   def filebrowser_action(self):
     return self._filebrowser_action
 
 

+ 147 - 34
desktop/libs/azure/src/azure/abfs/upload.py

@@ -15,6 +15,7 @@
 # limitations under the License.
 
 
 import logging
+import os
 import unicodedata
 from io import BytesIO
 
 
@@ -27,7 +28,8 @@ from azure.abfs.abfs import ABFSFileSystemException
 from desktop.conf import TASK_SERVER_V2
 from desktop.lib.exceptions_renderable import PopupException
 from desktop.lib.fsmanager import get_client
-from filebrowser.utils import calculate_total_size, generate_chunks, is_file_upload_allowed
+from filebrowser.conf import MAX_FILE_SIZE_UPLOAD_LIMIT
+from filebrowser.utils import calculate_total_size, generate_chunks, is_file_upload_allowed, massage_stats
 
 
 DEFAULT_WRITE_SIZE = 100 * 1024 * 1024  # As per Azure doc, maximum blob size is 100MB
 
 
@@ -256,49 +258,160 @@ class ABFSFileUploadHandler(FileUploadHandler):
       return None
       return None
 
 
 
 
-class ABFSNewFileUploadHandler(ABFSFileUploadHandler):
+class ABFSNewFileUploadHandler(FileUploadHandler):
   """
   """
-  This handler uploads the file to ABFS if the destination path starts with "ABFS" (case insensitive).
-  Streams data chunks directly to ABFS.
+  Handles direct file uploads to Azure Blob File System using streaming append operations.
+
+  This handler creates the file directly in ABFS and appends chunks as they arrive,
+  leveraging ABFS's append capabilities for efficient streaming uploads.
+
+  Key features:
+  - Direct streaming to ABFS (no temporary files)
+  - Uses ABFS append API with position-based writes
+  - Flush operation ensures data persistence
+  - Comprehensive validation and security checks
   """
   """
 
 
-  def __init__(self, dest_path, username):
+  def __init__(self, fs, dest_path, overwrite):
     self.chunk_size = DEFAULT_WRITE_SIZE
     self.chunk_size = DEFAULT_WRITE_SIZE
-    self.target_path = None
-    self.file = None
-    self._part_size = DEFAULT_WRITE_SIZE
+    self._fs = fs
+    self.dest_path = dest_path
+    self.overwrite = overwrite
+    self.total_bytes_received = 0
 
 
-    self.destination = dest_path
-    self.username = username
+    LOG.info(f"ABFSNewFileUploadHandler initialized - destination: {dest_path}, overwrite: {overwrite}")
 
 
-    # TODO: _is_abfs_upload really required?
-    if self._is_abfs_upload():
-      self._fs = self._get_abfs(self.username)
-      self.filesystem, self.directory = parse_uri(self.destination)[:2]
+  def new_file(self, field_name, file_name, *args, **kwargs):
+    super(ABFSNewFileUploadHandler, self).new_file(field_name, file_name, *args, **kwargs)
 
 
-    LOG.debug("Chunk size = %d" % DEFAULT_WRITE_SIZE)
+    LOG.info(f"Starting ABFS upload for file: {file_name}")
 
 
-  def new_file(self, field_name, file_name, *args, **kwargs):
-    if self._is_abfs_upload():
-      super(ABFSFileUploadHandler, self).new_file(field_name, file_name, *args, **kwargs)
+    # Validate upload prerequisites
+    self._validate_upload_prerequisites(file_name)
 
 
-      LOG.info('Using ABFSFileUploadHandler to handle file upload wit temp file%s.' % file_name)
-      self.target_path = self._fs.join(self.destination, file_name)
+    self.target_path = self._fs.join(self.dest_path, file_name)
+
+    # Create the file
+    try:
+      LOG.debug(f"Creating ABFS file at: {self.target_path}")
+      self._fs.create(self.target_path)
+      LOG.info(f"ABFS file created successfully: {self.target_path}")
+    except Exception as e:
+      LOG.error(f"Failed to create ABFS file for upload: {e}")
+      raise PopupException(f"Failed to initiate ABFS upload to target path: {self.target_path}", error_code=500)
+
+  def _validate_upload_prerequisites(self, file_name):
+    """Validate all prerequisites before initiating file upload to ABFS.
+
+    Performs security and permission checks including:
+    - File extension restrictions
+    - Destination path existence and type validation
+    - Directory traversal attack prevention
+    - Write permission verification
+    - File overwrite handling based on policy
+
+    Args:
+      file_name: Name of the file to be uploaded.
+
+    Raises:
+      PopupException: With appropriate HTTP error codes:
+        - 400: Invalid file extension or filename
+        - 403: Insufficient permissions
+        - 404: Destination path not found
+        - 409: File exists and overwrite is disabled
+    """
+    LOG.debug(f"Validating upload prerequisites for file: {file_name}")
+
+    # Check file extension restrictions
+    is_allowed, err_message = is_file_upload_allowed(file_name)
+    if not is_allowed:
+      LOG.warning(f"File upload rejected - {err_message}")
+      raise PopupException(err_message, error_code=400)
+
+    # Check if the destination path already exists or not
+    if not self._fs.exists(self.dest_path):
+      LOG.error(f"Destination path does not exist: {self.dest_path}")
+      raise PopupException(f"The destination path {self.dest_path} does not exist.", error_code=404)
+
+    # Check if the destination path is a directory or not
+    if not self._fs.isdir(self.dest_path):
+      LOG.error(f"Destination path is not a directory: {self.dest_path}")
+      raise PopupException(f"The destination path {self.dest_path} is not a directory.", error_code=400)
+
+    # Check if the file name contains a path separator
+    # This prevents directory traversal attacks
+    if os.path.sep in file_name:
+      LOG.warning(f"Invalid filename with path separator: {file_name}")
+      raise PopupException("Invalid filename. Path separators are not allowed.", error_code=400)
+
+    # Check if the user has write access to the destination path
+    if not self._fs.check_access(self.dest_path, permission="WRITE"):
+      LOG.error(f"Insufficient permissions for destination: {self.dest_path}")
+      raise PopupException(f"Insufficient permissions to write to ABFS path {self.dest_path}.", error_code=403)
+
+    # Check if the file already exists at the destination path
+    target_path = self._fs.join(self.dest_path, file_name)
+    if self._fs.exists(target_path):
+      if self.overwrite:
+        LOG.info(f"Overwriting existing file: {target_path}")
+        self._fs.remove(target_path)
+      else:
+        LOG.warning(f"File already exists and overwrite is disabled: {target_path}")
+        raise PopupException(f"The file {file_name} already exists at the destination path.", error_code=409)
+
+    LOG.debug("Upload prerequisites validation completed successfully")
+
+  def receive_data_chunk(self, raw_data, start):
+    self.total_bytes_received += len(raw_data)
+    max_size = MAX_FILE_SIZE_UPLOAD_LIMIT.get()
+
+    # Perform max size check on the fly
+    if max_size != -1 and max_size >= 0 and self.total_bytes_received > max_size:
+      LOG.error(f"File size exceeded limit - received: {self.total_bytes_received}, max: {max_size}")
+      raise PopupException(f"File exceeds maximum allowed size of {max_size} bytes.", error_code=413)
+
+    # Upload the chunk
+    self.upload_chunk(raw_data, start)
+    return None
 
 
+  def upload_chunk(self, raw_chunk, start):
+    try:
+      LOG.debug(f"Appending chunk to ABFS file - position: {start}, size: {len(raw_chunk)} bytes")
+      buffered_data = BytesIO(raw_chunk)
+      # TODO: Try encapsulating the _append method in the ABFS class with correct refactoring
+      self._fs._append(self.target_path, buffered_data, params={"position": int(start)})
+    except Exception as e:
+      LOG.error(f"Failed to append chunk at position {start}: {e}")
+      self._fs.remove(self.target_path)
+      raise PopupException(f"Failed to upload part: {e}", error_code=500)
+
+  def file_complete(self, file_size):
+    # Finish the upload by flushing
+    LOG.info(f"Flushing ABFS file - total size: {file_size} bytes")
+    self._fs.flush(self.target_path, {"position": int(file_size)})
+
+    file_stats = self._fs.stats(self.target_path)
+
+    # Perform size verification explicitly
+    actual_size = file_stats.size if hasattr(file_stats, "size") else file_stats.get("size", 0)
+    if actual_size != file_size:
+      LOG.error(f"ABFS upload size mismatch for {self.target_path}: expected {file_size} bytes, got {actual_size} bytes")
+
+      # Clean up the corrupted file
       try:
       try:
-        # Check access permissions before attempting upload
-        # self._check_access() #implement later
-        LOG.debug("Initiating ABFS upload to target path: %s" % self.target_path)
-        self._fs.create(self.target_path)
-        self.file = SimpleUploadedFile(name=file_name, content='')
-        raise StopFutureHandlers()
-      except (ABFSFileUploadError, ABFSFileSystemException) as e:
-        LOG.error("Encountered error in ABFSUploadHandler check_access: %s" % e)
-        raise StopUpload()
+        self._fs.remove(self.target_path)
+        LOG.info(f"Successfully cleaned up corrupted file: {self.target_path}")
+      except Exception as cleanup_error:
+        LOG.warning(f"Failed to clean up corrupted file {self.target_path}: {cleanup_error}")
 
 
-  def _get_abfs(self, username):
-    fs = get_client(fs='abfs', user=username)
-    if not fs:
-      raise ABFSFileUploadError(_("No ABFS filesystem found"))
+      raise PopupException(
+        f"Upload verification failed: expected {file_size} bytes, but only {actual_size} bytes were written. "
+        f"The incomplete file has been removed.",
+        error_code=422,
+      )
 
 
-    return fs
+    LOG.info(f"ABFS upload completed successfully: {file_size} bytes written to {self.target_path}")
+
+    file_stats = massage_stats(file_stats)
+
+    return file_stats
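The ABFS handler above leans on the `create()` / `_append()` / `flush()` sequence from `abfs.py`. A small illustrative sketch, assuming `fs` is an ABFS client obtained via the usual `get_client()` factory and using made-up paths and payloads:

```python
from io import BytesIO

from desktop.lib.fsmanager import get_client

fs = get_client(fs='abfs', user='hue')  # assumed ABFS client for an example user
target = 'abfs://demo-container/uploads/data.bin'

fs.create(target)                                                # new_file() creates an empty file
fs._append(target, BytesIO(b'chunk-1'), params={'position': 0})  # upload_chunk() writes at offset 0
fs._append(target, BytesIO(b'chunk-2'), params={'position': 7})  # next chunk starts where the last ended
fs.flush(target, {'position': 14})                               # file_complete() makes the 14 bytes durable
```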

+ 136 - 107
desktop/libs/hadoop/src/hadoop/fs/upload.py

@@ -34,9 +34,8 @@ from django.utils.translation import gettext as _
 import hadoop.cluster
 from desktop.lib import fsmanager
 from desktop.lib.exceptions_renderable import PopupException
-from desktop.lib.fsmanager import get_client
-from filebrowser.conf import ARCHIVE_UPLOAD_TEMPDIR
-from filebrowser.utils import calculate_total_size, generate_chunks, is_file_upload_allowed
+from filebrowser.conf import ARCHIVE_UPLOAD_TEMPDIR, MAX_FILE_SIZE_UPLOAD_LIMIT
+from filebrowser.utils import calculate_total_size, generate_chunks, is_file_upload_allowed, massage_stats
 from hadoop.conf import UPLOAD_CHUNK_SIZE
 from hadoop.fs.exceptions import WebHdfsException
 
 
@@ -417,132 +416,162 @@ class HDFSfileUploadHandler(FileUploadHandler):
 
 
 class HDFSNewFileUploadHandler(FileUploadHandler):
   """
   """
-  Handle file upload by storing data in a temp HDFS file.
+  Handles direct file uploads to HDFS using streaming append operations.
+
+  This handler creates the file directly in HDFS and appends chunks as they arrive,
+  leveraging HDFS's native append capabilities for efficient streaming uploads.
+
+  Key features:
+  - Direct streaming to HDFS (no temporary files)
+  - Uses HDFS append API for chunk-by-chunk uploads
+  - Automatic cleanup on failure
+  - Comprehensive validation and security checks
   """
   """
-  def __init__(self, dest_path, username):
-    self.chunk_size = UPLOAD_CHUNK_SIZE.get()
-    self._file = None
-    self._starttime = 0
-    self._destination = dest_path
-    self.username = username
 
 
-    self._fs = self._get_hdfs(self.username)
+  def __init__(self, fs, dest_path, overwrite):
+    self.chunk_size = UPLOAD_CHUNK_SIZE.get()
+    self._fs = fs
+    self.dest_path = dest_path
+    self.overwrite = overwrite
+    self.total_bytes_received = 0
+    self.target_file_path = None
 
 
-    LOG.debug("Chunk size = %d" % self.chunk_size)
+    LOG.info(f"HDFSNewFileUploadHandler initialized - destination: {dest_path}, overwrite: {overwrite}")
 
 
   def new_file(self, field_name, file_name, *args, **kwargs):
   def new_file(self, field_name, file_name, *args, **kwargs):
     super(HDFSNewFileUploadHandler, self).new_file(field_name, file_name, *args, **kwargs)
     super(HDFSNewFileUploadHandler, self).new_file(field_name, file_name, *args, **kwargs)
 
 
-    LOG.info('Using HDFSfileUploadHandler to handle file upload.')
-    try:
-      self._file = HDFSNewTemporaryUploadedFile(self._fs, file_name, self._destination, self.username)
-      LOG.debug('Upload attempt to %s' % (self._file.get_temp_path()))
+    LOG.info(f"Starting HDFS upload for file: {file_name}")
 
 
-      self._starttime = time.time()
-    except Exception as ex:
-      LOG.error("Not using HDFS upload handler: %s" % (ex))
-      raise ex
+    # Validate upload prerequisites
+    self._validate_upload_prerequisites(file_name)
 
 
-    raise StopFutureHandlers()
-
-  def receive_data_chunk(self, raw_data, start):
-    LOG.debug("HDFSfileUploadHandler receive_data_chunk")
+    self.target_file_path = self._fs.join(self.dest_path, file_name)
 
 
+    # Create the file directly at the destination
     try:
     try:
-      self._file.write(raw_data)
-      self._file.flush()
-      return None
-    except IOError:
-      LOG.exception('Error storing upload data in temporary file "%s"' % (self._file.get_temp_path()))
-      raise StopUpload()
-
-  def file_complete(self, file_size):
-    try:
-      self._file.finish_upload(file_size)
-    except IOError:
-      LOG.exception('Error closing uploaded temporary file "%s"' % (self._file.get_temp_path()))
-      raise
-
-    elapsed = time.time() - self._starttime
-    LOG.info('Uploaded %s bytes to HDFS in %s seconds' % (file_size, elapsed))
-    return self._file
-
-  def upload_complete(self):
-    LOG.debug("HDFSFileUploadHandler: Running after upload complete task")
-    original_file_path = self._fs.join(self._destination, self._file.name)
-    tmp_file = self._file.get_temp_path()
-
-    self._fs.do_as_user(self.username, self._fs.rename, tmp_file, original_file_path)
-
-  def upload_interrupted(self):
-    LOG.debug("HDFSFileUploadHandler: Attempting cleanup after upload interruption")
-    if self._file and hasattr(self._file, 'remove'):
-      self._file.remove()
-
-  def _get_hdfs(self, username):
-    fs = get_client(fs='hdfs', user=username)
-    if not fs:
-      raise HDFSerror(_("No HDFS found for upload operation."))
-
-    return fs
+      LOG.debug(f"Creating HDFS file at: {self.target_file_path}")
+      self._fs.create(
+        self.target_file_path,
+        overwrite=False,  # We already handled overwrite above
+        permission=self._fs.getDefaultFilePerms(),
+      )
+      LOG.info(f"HDFS file created successfully: {self.target_file_path}")
+    except Exception as ex:
+      LOG.error(f"Failed to create HDFS file for upload: {ex}")
+      raise PopupException(f"Failed to initiate HDFS upload: {ex}", error_code=500)
 
 
+  def _validate_upload_prerequisites(self, file_name):
+    """Validate all prerequisites before initiating file upload to HDFS.
 
 
-class HDFSNewTemporaryUploadedFile(object):
-  """
-  A temporary HDFS file to store upload data.
-  This class does not have any file read methods.
-  """
-  def __init__(self, fs, name, destination, username):
-    self.name = name
-    self.size = None
-    self._do_cleanup = False
-    self._fs = fs
+    Performs security and permission checks including:
+    - File extension restrictions
+    - Destination path existence and type validation
+    - Directory traversal attack prevention
+    - Write permission verification
+    - File overwrite handling based on policy
 
 
-    self._path = self._fs.mkswap(name, suffix='tmp', basedir=destination)
+    Args:
+      file_name: Name of the file to be uploaded.
+
+    Raises:
+      PopupException: With appropriate HTTP error codes:
+        - 400: Invalid file extension or filename
+        - 403: Insufficient permissions
+        - 404: Destination path not found
+        - 409: File exists and overwrite is disabled
+    """
+    LOG.debug(f"Validating upload prerequisites for file: {file_name}")
 
 
-    # Check access permissions before attempting upload
+    # Check file extension restrictions
+    is_allowed, err_message = is_file_upload_allowed(file_name)
+    if not is_allowed:
+      LOG.warning(f"File upload rejected - {err_message}")
+      raise PopupException(err_message, error_code=400)
+
+    # Check if the destination path already exists or not
+    if not self._fs.exists(self.dest_path):
+      LOG.error(f"Destination path does not exist: {self.dest_path}")
+      raise PopupException(f"The destination path {self.dest_path} does not exist.", error_code=404)
+
+    # Check if the destination path is a directory or not
+    if not self._fs.isdir(self.dest_path):
+      LOG.error(f"Destination path is not a directory: {self.dest_path}")
+      raise PopupException(f"The destination path {self.dest_path} is not a directory.", error_code=400)
+
+    # Check if the file name contains a path separator
+    # This prevents directory traversal attacks
+    if os.path.sep in file_name:
+      LOG.warning(f"Invalid filename with path separator: {file_name}")
+      raise PopupException("Invalid filename. Path separators are not allowed.", error_code=400)
+
+    # Check if the user has write access to the destination path
     try:
     try:
-      self._fs.check_access(destination, 'rw-')
-    except WebHdfsException:
-      raise HDFSerror(_('User %s does not have permissions to write to path "%s".') % (username, destination))
-
-    if self._fs.exists(self._path):
-      self._fs._delete(self._path)
+      self._fs.check_access(self.dest_path, "rw-")
+    except WebHdfsException as e:
+      LOG.error(f"Error checking access to path {self.dest_path}: {e}")
+      raise PopupException(f"Insufficient permissions to write to path {self.dest_path}.", error_code=403)
+
+    # Check if the file already exists at the destination path
+    target_file_path = self._fs.join(self.dest_path, file_name)
+    if self._fs.exists(target_file_path):
+      if self.overwrite:
+        LOG.info(f"Overwriting existing file: {target_file_path}")
+        self._fs.remove(target_file_path, skip_trash=True)
+      else:
+        LOG.warning(f"File already exists and overwrite is disabled: {target_file_path}")
+        raise PopupException(f"The file {file_name} already exists at the destination path.", error_code=409)
 
 
-    self._file = self._fs.open(self._path, 'w')
+    LOG.debug("Upload prerequisites validation completed successfully")
 
 
-    self._do_cleanup = True
-
-  def __del__(self):
-    if self._do_cleanup:
-      # Do not do cleanup here. It's hopeless. The self._fs threadlocal states
-      # are going to be all wrong.
-      LOG.debug(f"Check for left-over upload file for cleanup if the upload op was unsuccessful: {self._path}")
+  def receive_data_chunk(self, raw_data, start):
+    self.total_bytes_received += len(raw_data)
+    max_size = MAX_FILE_SIZE_UPLOAD_LIMIT.get()
 
 
-  def get_temp_path(self):
-    return self._path
+    # Perform max size check on the fly
+    if max_size != -1 and max_size >= 0 and self.total_bytes_received > max_size:
+      LOG.error(f"File size exceeded limit - received: {self.total_bytes_received}, max: {max_size}")
+      raise PopupException(f"File exceeds maximum allowed size of {max_size} bytes.", error_code=413)
 
 
-  def finish_upload(self, size):
+    # Append the chunk directly to the destination file
     try:
     try:
-      self.size = size
-      self.close()
-    except Exception:
-      LOG.exception('Error uploading file to %s' % (self._path))
-      raise
+      LOG.debug(f"Appending chunk to HDFS file - size: {len(raw_data)} bytes, total: {self.total_bytes_received} bytes")
+      self._fs.append(self.target_file_path, raw_data)
+      return None
+    except Exception as e:
+      LOG.exception(f'Error appending data to file "{self.target_file_path}"')
+      try:  # Try to clean up the partial file
+        LOG.info(f"Attempting to clean up partial file: {self.target_file_path}")
+        self._fs.remove(self.target_file_path, skip_trash=True)
+      except Exception:
+        pass
 
 
-  def remove(self):
-    try:
-      self._fs.remove(self._path, skip_trash=True)
-      self._do_cleanup = False
-    except IOError as ex:
-      if ex.errno != errno.ENOENT:
-        LOG.exception('Failed to remove temporary upload file "%s". Please cleanup manually: %s' % (self._path, ex))
+      raise PopupException(f"Failed to write upload data: {e}", error_code=500)
 
 
-  def write(self, data):
-    self._file.write(data)
+  def file_complete(self, file_size):
+    # Get file stats
+    file_stats = self._fs.stats(self.target_file_path)
 
 
-  def flush(self):
-    self._file.flush()
+    # Perform size verification explicitly
+    actual_size = file_stats.size
+    if actual_size != file_size:
+      LOG.error(f"HDFS upload size mismatch for {self.target_file_path}: expected {file_size} bytes, got {actual_size} bytes")
 
 
-  def close(self):
-    self._file.close()
+      # Clean up the corrupted file
+      try:
+        self._fs.remove(self.target_file_path, skip_trash=True)
+        LOG.info(f"Successfully cleaned up corrupted file: {self.target_file_path}")
+      except Exception as cleanup_error:
+        LOG.warning(f"Failed to clean up corrupted file {self.target_file_path}: {cleanup_error}")
+
+      # Raise exception to fail the upload
+      raise PopupException(
+        f"Upload verification failed: expected {file_size} bytes, but only {actual_size} bytes were written. "
+        f"The incomplete file has been removed.",
+        error_code=422,
+      )
+    else:
+      LOG.info(f"Upload completed successfully: {self.target_file_path}, size: {file_size} bytes")
+
+    file_stats = massage_stats(file_stats)
+    return file_stats
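A minimal sketch of the create-then-append flow the HDFS handler above relies on, with an illustrative path and payload (`fs` is assumed to be a WebHdfs client for an example user):

```python
from desktop.lib.fsmanager import get_client

fs = get_client(fs='hdfs', user='hue')   # assumed client, obtained the same way the old handler did
target = '/user/demo/uploads/data.csv'

fs.create(target, overwrite=False, permission=fs.getDefaultFilePerms())
fs.append(target, b'col_a,col_b\n')      # one receive_data_chunk() worth of bytes

stats = fs.stats(target)
assert stats.size == 12                  # file_complete() verifies the size before returning stats
```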

+ 30 - 24
desktop/libs/hadoop/src/hadoop/fs/webhdfs.py

@@ -19,26 +19,23 @@
 Interfaces for Hadoop filesystem access via HttpFs/WebHDFS
 """

-import stat
-import time
 import errno
 import logging
 import posixpath
+import stat
 import threading
-import urllib.error
-import urllib.request
+import time
 from builtins import object, oct
 from urllib.parse import unquote as urllib_unquote, urlparse

-from django.http.multipartparser import MultiPartParser
 from django.utils.encoding import smart_str
 from django.utils.translation import gettext as _
 from past.builtins import long

-import hadoop.conf
 import desktop.conf
+import hadoop.conf
 from desktop.lib.rest import http_client, resource
-from hadoop.fs import SEEK_CUR, SEEK_END, SEEK_SET, normpath as fs_normpath
+from hadoop.fs import normpath as fs_normpath, SEEK_CUR, SEEK_END, SEEK_SET
 from hadoop.fs.exceptions import WebHdfsException
 from hadoop.fs.hadoopfs import Hdfs
 from hadoop.fs.webhdfs_types import WebHdfsContentSummary, WebHdfsStat
@@ -217,7 +214,6 @@ class WebHdfs(Hdfs):
     return curr

   def is_absolute(self, path):
-    length = len(self._scheme)
     return path.startswith(self._scheme) if self._scheme else path == '/'

   def strip_normpath(self, path):
@@ -592,6 +588,20 @@ class WebHdfs(Hdfs):
     return File(self, path, mode)

   def getDefaultFilePerms(self):
+    """
+    Calculate the default file permissions after applying the umask.
+
+    Files are created with default permissions of 0o666 (rw-rw-rw-).
+    The umask defines which permission bits to disable. This method
+    masks out those bits to compute the final permissions using:
+
+      0o666 & (0o1777 ^ umask)
+
+    The umask is set to 0o022 by default, which means that the default file permissions are 0o644 (rw-r--r--).
+
+    Returns:
+      int: Final permission bits for a new file (e.g., 0o644)
+    """
     return 0o666 & (0o1777 ^ self._umask)

   def getDefaultDirPerms(self):
@@ -742,10 +752,10 @@ class WebHdfs(Hdfs):
     if not self.exists(destination):
       self.do_as_user(owner, self.mkdir, destination, mode=dir_mode)

-    for stat in self.listdir_stats(source):
-      source_file = stat.path
-      destination_file = posixpath.join(destination, stat.name)
-      if stat.isDir:
+    for s in self.listdir_stats(source):
+      source_file = s.path
+      destination_file = posixpath.join(destination, s.name)
+      if s.isDir:
         self.copy_remote_dir(source_file, destination_file, dir_mode, owner)
       else:
         self.do_as_user(owner, self.copyfile, source_file, destination_file)
@@ -889,11 +899,11 @@ class WebHdfs(Hdfs):
     return self.do_as_user(self.superuser, fn, *args, **kwargs)

   def do_recursively(self, fn, path, *args, **kwargs):
-    for stat in self.listdir_stats(path):
+    for s in self.listdir_stats(path):
       try:
-        if stat.isDir:
-          self.do_recursively(fn, stat.path, *args, **kwargs)
-        fn(stat.path, *args, **kwargs)
+        if s.isDir:
+          self.do_recursively(fn, s.path, *args, **kwargs)
+        fn(s.path, *args, **kwargs)
       except Exception:
         pass

@@ -908,17 +918,13 @@ class WebHdfs(Hdfs):
 
 
     self.do_as_user(username, self.rename, tmp_file, dst)

-  def upload_v1(self, META, input_data, destination, username):
-    from hadoop.fs.upload import HDFSNewFileUploadHandler  # Circular dependency
-
-    hdfs_upload_handler = HDFSNewFileUploadHandler(destination, username)
-
-    parser = MultiPartParser(META, input_data, [hdfs_upload_handler])
-    return parser.parse()
-
   def filebrowser_action(self):
     return None

+  def get_upload_handler(self, destination_path, overwrite):
+    from hadoop.fs.upload import HDFSNewFileUploadHandler
+    return HDFSNewFileUploadHandler(self, destination_path, overwrite)
+
 
 
 class File(object):
   """

Some files were not shown because too many files changed in this diff